msm: ipa: initial commit of IPA driver

This is a snapshot of IPA from kernel msm-4.4 based on
commit ebc2a18351d4 ("msm: ipa: WA to get PA of sgt_tbl from wlan")

CRs-Fixed: 1077422
Change-Id: I97cf9ee9c104ac5ab5bc0577eb9413264b08a7a5
Signed-off-by: Amir Levy <alevy@codeaurora.org>
Author: Amir Levy
Date: 2016-10-27 18:08:27 +03:00
Parent: 8751c8965c, commit: 9659e593c8
110 changed files with 114401 additions and 0 deletions


@@ -0,0 +1,207 @@
Qualcomm Technologies, Inc. Internet Packet Accelerator
Internet Packet Accelerator (IPA) is a programmable protocol
processor HW block. It is designed to support generic HW processing
of UL/DL IP packets for various use cases independent of radio technology.
Required properties:
IPA node:
- compatible : "qcom,ipa"
- reg: Specifies the base physical addresses and the sizes of the IPA
registers.
- reg-names: "ipa-base" - string to identify the IPA CORE base registers.
"bam-base" - string to identify the IPA BAM base registers.
"a2-bam-base" - string to identify the A2 BAM base registers.
- interrupts: Specifies the interrupt associated with IPA.
- interrupt-names: "ipa-irq" - string to identify the IPA core interrupt.
"bam-irq" - string to identify the IPA BAM interrupt.
"a2-bam-irq" - string to identify the A2 BAM interrupt.
- qcom,ipa-hw-ver: Specifies the IPA hardware version.
Optional:
- qcom,wan-rx-ring-size: size of WAN rx ring, default is 32
- qcom,arm-smmu: SMMU is present and ARM SMMU driver is used
- qcom,msm-smmu: SMMU is present and QSMMU driver is used
- qcom,smmu-s1-bypass: Boolean context flag to set SMMU to S1 bypass
- qcom,smmu-fast-map: Boolean context flag to set SMMU to fastpath mode
- ipa_smmu_ap: AP general purpose SMMU device
compatible "qcom,ipa-smmu-ap-cb"
- ipa_smmu_wlan: WDI SMMU device
compatible "qcom,ipa-smmu-wlan-cb"
- ipa_smmu_uc: uc SMMU device
compatible "qcom,ipa-smmu-uc-cb"
- qcom,smmu-disable-htw: boolean value to turn off SMMU page table caching
- qcom,use-a2-service: determine if A2 service will be used
- qcom,use-ipa-tethering-bridge: determine if tethering bridge will be used
- qcom,use-ipa-bamdma-a2-bridge: determine if a2/ipa hw bridge will be used
- qcom,ee: The execution environment (EE) number assigned to the
(non-secure) APPS processor, from the IPA-BAM point of view
- qcom,ipa-hw-mode: IPA hardware mode - Normal, Virtual memory allocation,
memory allocation over a PCIe bridge
- qcom,msm-bus,name: String representing the client-name
- qcom,msm-bus,num-cases: Total number of usecases
- qcom,msm-bus,active-only: Boolean context flag for requests in active or
dual (active & sleep) context
- qcom,msm-bus,num-paths: Total number of master-slave pairs
- qcom,msm-bus,vectors-KBps: Arrays of unsigned integers representing:
master-id, slave-id, arbitrated bandwidth
in KBps, instantaneous bandwidth in KBps
- qcom,ipa-bam-remote-mode: Boolean context flag to determine if ipa bam
is in remote mode.
- qcom,modem-cfg-emb-pipe-flt: Boolean context flag to determine if modem
configures embedded pipe filtering rules
- qcom,skip-uc-pipe-reset: Boolean context flag to indicate whether
a pipe reset via the IPA uC is required
- qcom,ipa-wdi2: Boolean context flag to indicate whether
using wdi-2.0 or not
- qcom,use-dma-zone: Boolean context flag to indicate whether memory
allocations controlled by the IPA driver that do not
specify a struct device * should use GFP_DMA to
work around IPA HW limitations
- qcom,use-gsi: Boolean context flag to indicate if the
transport protocol is GSI
- qcom,use-rg10-limitation-mitigation: Boolean context flag to activate
the mitigation to register group 10
AP access limitation
- qcom,do-not-use-ch-gsi-20: Boolean context flag to activate
software workaround for IPA limitation
to not use GSI physical channel 20
- qcom,tethered-flow-control: Boolean context flag to indicate whether
apps-based flow control is needed for a
tethered call.
IPA pipe sub nodes (A2 static pipes configurations):
-label: two labels are supported, a2-to-ipa and ipa-to-a2, which
supply the static configuration for the A2-IPA connection.
-qcom,src-bam-physical-address: The physical address of the source BAM
-qcom,ipa-bam-mem-type: The memory type:
0 (Pipe memory), 1 (Private memory), 2 (System memory)
-qcom,src-bam-pipe-index: Source pipe index
-qcom,dst-bam-physical-address: The physical address of the
destination BAM
-qcom,dst-bam-pipe-index: Destination pipe index
-qcom,data-fifo-offset: Data fifo base offset
-qcom,data-fifo-size: Data fifo size (bytes)
-qcom,descriptor-fifo-offset: Descriptor fifo base offset
-qcom,descriptor-fifo-size: Descriptor fifo size (bytes)
Optional properties:
-qcom,ipa-pipe-mem: Specifies the base physical address and the
size of the IPA pipe memory region.
Pipe memory is a feature that may be supported by the
target (HW platform). The driver supports using pipe
memory instead of system memory. If this property does
not appear in the IPA DTS entry, the driver will use
system memory.
- clocks: This property shall provide a list of entries, each of which
contains a phandle to a clock controller device and a macro that is
the clock's name in hardware. This should be "clock_rpm" as the clock
controller phandle and "clk_ipa_clk" as the macro for "iface_clk"
- clock-names: This property shall contain the clock input names used
by the driver, in the same order as the clocks property. This should be
"iface_clk"
IPA SMMU sub nodes
-compatible: "qcom,ipa-smmu-ap-cb" - represents the AP context bank.
-compatible: "qcom,ipa-smmu-wlan-cb" - represents IPA WLAN context bank.
-compatible: "qcom,ipa-smmu-uc-cb" - represents IPA uC context bank (for uC
offload scenarios).
- iommus : the phandle and stream IDs for the SMMU used by this root
- qcom,iova-mapping: specifies the start address and size of iova space.
IPA SMP2P sub nodes
-compatible: "qcom,smp2pgpio-map-ipa-1-out" - represents the out gpio from
ipa driver to modem.
-compatible: "qcom,smp2pgpio-map-ipa-1-in" - represents the in gpio to
ipa driver from modem.
-gpios: Binding to the gpio defined in XXX-smp2p.dtsi
Example:
qcom,ipa@fd4c0000 {
compatible = "qcom,ipa";
reg = <0xfd4c0000 0x26000>,
<0xfd4c4000 0x14818>,
<0xfc834000 0x7000>;
reg-names = "ipa-base", "bam-base", "a2-bam-base";
interrupts = <0 252 0>,
<0 253 0>,
<0 29 1>;
interrupt-names = "ipa-irq", "bam-irq", "a2-bam-irq";
qcom,ipa-hw-ver = <1>;
clocks = <&clock_rpm clk_ipa_clk>;
clock-names = "iface_clk";
qcom,msm-bus,name = "ipa";
qcom,msm-bus,num-cases = <3>;
qcom,msm-bus,num-paths = <2>;
qcom,msm-bus,vectors-KBps =
<90 512 0 0>, <90 585 0 0>, /* No vote */
<90 512 100000 800000>, <90 585 100000 800000>, /* SVS */
<90 512 100000 1200000>, <90 585 100000 1200000>; /* PERF */
qcom,bus-vector-names = "MIN", "SVS", "PERF";
qcom,pipe1 {
label = "a2-to-ipa";
qcom,src-bam-physical-address = <0xfc834000>;
qcom,ipa-bam-mem-type = <0>;
qcom,src-bam-pipe-index = <1>;
qcom,dst-bam-physical-address = <0xfd4c0000>;
qcom,dst-bam-pipe-index = <6>;
qcom,data-fifo-offset = <0x1000>;
qcom,data-fifo-size = <0xd00>;
qcom,descriptor-fifo-offset = <0x1d00>;
qcom,descriptor-fifo-size = <0x300>;
};
qcom,pipe2 {
label = "ipa-to-a2";
qcom,src-bam-physical-address = <0xfd4c0000>;
qcom,ipa-bam-mem-type = <0>;
qcom,src-bam-pipe-index = <7>;
qcom,dst-bam-physical-address = <0xfc834000>;
qcom,dst-bam-pipe-index = <0>;
qcom,data-fifo-offset = <0x00>;
qcom,data-fifo-size = <0xd00>;
qcom,descriptor-fifo-offset = <0xd00>;
qcom,descriptor-fifo-size = <0x300>;
};
/* smp2p gpio information */
qcom,smp2pgpio_map_ipa_1_out {
compatible = "qcom,smp2pgpio-map-ipa-1-out";
gpios = <&smp2pgpio_ipa_1_out 0 0>;
};
qcom,smp2pgpio_map_ipa_1_in {
compatible = "qcom,smp2pgpio-map-ipa-1-in";
gpios = <&smp2pgpio_ipa_1_in 0 0>;
};
ipa_smmu_ap: ipa_smmu_ap {
compatible = "qcom,ipa-smmu-ap-cb";
iommus = <&anoc2_smmu 0x30>;
qcom,iova-mapping = <0x10000000 0x40000000>;
};
ipa_smmu_wlan: ipa_smmu_wlan {
compatible = "qcom,ipa-smmu-wlan-cb";
iommus = <&anoc2_smmu 0x31>;
};
ipa_smmu_uc: ipa_smmu_uc {
compatible = "qcom,ipa-smmu-uc-cb";
iommus = <&anoc2_smmu 0x32>;
qcom,iova-mapping = <0x40000000 0x20000000>;
};
};


@@ -0,0 +1,18 @@
* Qualcomm Technologies, Inc. RmNet IPA driver module
This module enables embedded data calls using IPA HW.
Required properties:
- compatible: Must be "qcom,rmnet-ipa"
Optional:
- qcom,rmnet-ipa-ssr: determine if modem SSR is supported
- qcom,ipa-loaduC: indicate that ipa uC should be loaded
- qcom,ipa-advertise-sg-support: determine how to respond to a query
regarding scatter-gather capability
Example:
qcom,rmnet-ipa {
compatible = "qcom,rmnet-ipa";
};


@@ -0,0 +1,18 @@
* Qualcomm Technologies, Inc. RmNet IPA driver module
This module enables embedded data calls using IPA v3 HW.
Required properties:
- compatible: Must be "qcom,rmnet-ipa3"
Optional:
- qcom,rmnet-ipa-ssr: determine if modem SSR is supported
- qcom,ipa-loaduC: indicate that ipa uC should be loaded
- qcom,ipa-advertise-sg-support: determine how to respond to a query
regarding scatter-gather capability
Example:
qcom,rmnet-ipa3 {
compatible = "qcom,rmnet-ipa3";
};


@@ -8,3 +8,5 @@ endif
source "drivers/platform/goldfish/Kconfig"
source "drivers/platform/chrome/Kconfig"
source "drivers/platform/msm/Kconfig"


@@ -7,3 +7,4 @@ obj-$(CONFIG_MIPS) += mips/
obj-$(CONFIG_OLPC) += olpc/
obj-$(CONFIG_GOLDFISH) += goldfish/
obj-$(CONFIG_CHROME_PLATFORMS) += chrome/
obj-$(CONFIG_ARCH_QCOM) += msm/


@@ -0,0 +1,68 @@
menu "Qualcomm Technologies, Inc. MSM specific device drivers"
depends on ARCH_QCOM
config IPA
tristate "IPA support"
depends on SPS && NET
help
This driver supports the Internet Packet Accelerator (IPA) core.
IPA is a programmable protocol processor HW block.
It is designed to support generic HW processing of UL/DL IP packets
for various use cases independent of radio technology.
The driver supports client connection and configuration
for the IPA core.
Kernel and user-space processes can call the IPA driver
to configure IPA core.
config RMNET_IPA
tristate "IPA RMNET WWAN Network Device"
depends on IPA && MSM_QMI_INTERFACE
help
This WWAN network driver implements a network stack class device.
It supports embedded data transfer from A7 to Q6, configures IPA HW
for the RmNet data driver, and exchanges QMI messages between the
A7 and Q6 IPA drivers.
config GSI
bool "GSI support"
help
This driver provides the transport needed to talk to the
IPA core. It replaces the BAM transport used previously.
The GSI connects to a peripheral component via a uniform TLV
interface, and allows it to interface with other peripherals
and CPUs over various types of interfaces such as MHI, xDCI,
xHCI, GPI, WDI, Ethernet, etc.
config IPA3
tristate "IPA3 support"
depends on GSI && NET
help
This driver supports the Internet Packet Accelerator (IPA3) core.
IPA is a programmable protocol processor HW block.
It is designed to support generic HW processing of UL/DL IP packets
for various use cases independent of radio technology.
The driver supports client connection and configuration
for the IPA core.
Kernel and user-space processes can call the IPA driver
to configure IPA core.
config RMNET_IPA3
tristate "IPA3 RMNET WWAN Network Device"
depends on IPA3 && MSM_QMI_INTERFACE
help
This WWAN network driver implements a network stack class device.
It supports embedded data transfer from A7 to Q6, configures IPA HW
for the RmNet data driver, and exchanges QMI messages between the
A7 and Q6 IPA drivers.
config IPA_UT
tristate "IPA Unit-Test Framework and Test Suites"
depends on IPA3 && DEBUG_FS
help
This module implements the IPA in-kernel test framework.
The framework supports defining and running tests, grouped
into suites according to the sub-unit of the IPA being tested.
The user interface to run and control the tests is the debugfs
file system.
endmenu


@@ -0,0 +1,6 @@
#
# Makefile for the MSM specific device drivers.
#
obj-$(CONFIG_GSI) += gsi/
obj-$(CONFIG_IPA) += ipa/
obj-$(CONFIG_IPA3) += ipa/


@@ -0,0 +1,5 @@
obj-$(CONFIG_IPA) += ipa_v2/ ipa_clients/ ipa_common
obj-$(CONFIG_IPA3) += ipa_v3/ ipa_clients/ ipa_common
obj-$(CONFIG_IPA_UT) += test/
ipa_common += ipa_api.o ipa_rm.o ipa_rm_dependency_graph.o ipa_rm_peers_list.o ipa_rm_resource.o ipa_rm_inactivity_timer.o

File diff suppressed because it is too large.


@@ -0,0 +1,400 @@
/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/ipa_mhi.h>
#include <linux/ipa_uc_offload.h>
#include "ipa_common_i.h"
#ifndef _IPA_API_H_
#define _IPA_API_H_
struct ipa_api_controller {
int (*ipa_connect)(const struct ipa_connect_params *in,
struct ipa_sps_params *sps, u32 *clnt_hdl);
int (*ipa_disconnect)(u32 clnt_hdl);
int (*ipa_reset_endpoint)(u32 clnt_hdl);
int (*ipa_clear_endpoint_delay)(u32 clnt_hdl);
int (*ipa_disable_endpoint)(u32 clnt_hdl);
int (*ipa_cfg_ep)(u32 clnt_hdl, const struct ipa_ep_cfg *ipa_ep_cfg);
int (*ipa_cfg_ep_nat)(u32 clnt_hdl,
const struct ipa_ep_cfg_nat *ipa_ep_cfg);
int (*ipa_cfg_ep_hdr)(u32 clnt_hdl,
const struct ipa_ep_cfg_hdr *ipa_ep_cfg);
int (*ipa_cfg_ep_hdr_ext)(u32 clnt_hdl,
const struct ipa_ep_cfg_hdr_ext *ipa_ep_cfg);
int (*ipa_cfg_ep_mode)(u32 clnt_hdl,
const struct ipa_ep_cfg_mode *ipa_ep_cfg);
int (*ipa_cfg_ep_aggr)(u32 clnt_hdl,
const struct ipa_ep_cfg_aggr *ipa_ep_cfg);
int (*ipa_cfg_ep_deaggr)(u32 clnt_hdl,
const struct ipa_ep_cfg_deaggr *ipa_ep_cfg);
int (*ipa_cfg_ep_route)(u32 clnt_hdl,
const struct ipa_ep_cfg_route *ipa_ep_cfg);
int (*ipa_cfg_ep_holb)(u32 clnt_hdl,
const struct ipa_ep_cfg_holb *ipa_ep_cfg);
int (*ipa_cfg_ep_cfg)(u32 clnt_hdl,
const struct ipa_ep_cfg_cfg *ipa_ep_cfg);
int (*ipa_cfg_ep_metadata_mask)(u32 clnt_hdl,
const struct ipa_ep_cfg_metadata_mask *ipa_ep_cfg);
int (*ipa_cfg_ep_holb_by_client)(enum ipa_client_type client,
const struct ipa_ep_cfg_holb *ipa_ep_cfg);
int (*ipa_cfg_ep_ctrl)(u32 clnt_hdl,
const struct ipa_ep_cfg_ctrl *ep_ctrl);
int (*ipa_add_hdr)(struct ipa_ioc_add_hdr *hdrs);
int (*ipa_del_hdr)(struct ipa_ioc_del_hdr *hdls);
int (*ipa_commit_hdr)(void);
int (*ipa_reset_hdr)(void);
int (*ipa_get_hdr)(struct ipa_ioc_get_hdr *lookup);
int (*ipa_put_hdr)(u32 hdr_hdl);
int (*ipa_copy_hdr)(struct ipa_ioc_copy_hdr *copy);
int (*ipa_add_hdr_proc_ctx)(struct ipa_ioc_add_hdr_proc_ctx *proc_ctxs);
int (*ipa_del_hdr_proc_ctx)(struct ipa_ioc_del_hdr_proc_ctx *hdls);
int (*ipa_add_rt_rule)(struct ipa_ioc_add_rt_rule *rules);
int (*ipa_del_rt_rule)(struct ipa_ioc_del_rt_rule *hdls);
int (*ipa_commit_rt)(enum ipa_ip_type ip);
int (*ipa_reset_rt)(enum ipa_ip_type ip);
int (*ipa_get_rt_tbl)(struct ipa_ioc_get_rt_tbl *lookup);
int (*ipa_put_rt_tbl)(u32 rt_tbl_hdl);
int (*ipa_query_rt_index)(struct ipa_ioc_get_rt_tbl_indx *in);
int (*ipa_mdfy_rt_rule)(struct ipa_ioc_mdfy_rt_rule *rules);
int (*ipa_add_flt_rule)(struct ipa_ioc_add_flt_rule *rules);
int (*ipa_del_flt_rule)(struct ipa_ioc_del_flt_rule *hdls);
int (*ipa_mdfy_flt_rule)(struct ipa_ioc_mdfy_flt_rule *rules);
int (*ipa_commit_flt)(enum ipa_ip_type ip);
int (*ipa_reset_flt)(enum ipa_ip_type ip);
int (*allocate_nat_device)(struct ipa_ioc_nat_alloc_mem *mem);
int (*ipa_nat_init_cmd)(struct ipa_ioc_v4_nat_init *init);
int (*ipa_nat_dma_cmd)(struct ipa_ioc_nat_dma_cmd *dma);
int (*ipa_nat_del_cmd)(struct ipa_ioc_v4_nat_del *del);
int (*ipa_send_msg)(struct ipa_msg_meta *meta, void *buff,
ipa_msg_free_fn callback);
int (*ipa_register_pull_msg)(struct ipa_msg_meta *meta,
ipa_msg_pull_fn callback);
int (*ipa_deregister_pull_msg)(struct ipa_msg_meta *meta);
int (*ipa_register_intf)(const char *name,
const struct ipa_tx_intf *tx,
const struct ipa_rx_intf *rx);
int (*ipa_register_intf_ext)(const char *name,
const struct ipa_tx_intf *tx,
const struct ipa_rx_intf *rx,
const struct ipa_ext_intf *ext);
int (*ipa_deregister_intf)(const char *name);
int (*ipa_set_aggr_mode)(enum ipa_aggr_mode mode);
int (*ipa_set_qcncm_ndp_sig)(char sig[3]);
int (*ipa_set_single_ndp_per_mbim)(bool enable);
int (*ipa_tx_dp)(enum ipa_client_type dst, struct sk_buff *skb,
struct ipa_tx_meta *metadata);
int (*ipa_tx_dp_mul)(enum ipa_client_type dst,
struct ipa_tx_data_desc *data_desc);
void (*ipa_free_skb)(struct ipa_rx_data *);
int (*ipa_setup_sys_pipe)(struct ipa_sys_connect_params *sys_in,
u32 *clnt_hdl);
int (*ipa_teardown_sys_pipe)(u32 clnt_hdl);
int (*ipa_sys_setup)(struct ipa_sys_connect_params *sys_in,
unsigned long *ipa_bam_hdl,
u32 *ipa_pipe_num, u32 *clnt_hdl, bool en_status);
int (*ipa_sys_teardown)(u32 clnt_hdl);
int (*ipa_sys_update_gsi_hdls)(u32 clnt_hdl, unsigned long gsi_ch_hdl,
unsigned long gsi_ev_hdl);
int (*ipa_connect_wdi_pipe)(struct ipa_wdi_in_params *in,
struct ipa_wdi_out_params *out);
int (*ipa_disconnect_wdi_pipe)(u32 clnt_hdl);
int (*ipa_enable_wdi_pipe)(u32 clnt_hdl);
int (*ipa_disable_wdi_pipe)(u32 clnt_hdl);
int (*ipa_resume_wdi_pipe)(u32 clnt_hdl);
int (*ipa_suspend_wdi_pipe)(u32 clnt_hdl);
int (*ipa_get_wdi_stats)(struct IpaHwStatsWDIInfoData_t *stats);
u16 (*ipa_get_smem_restr_bytes)(void);
int (*ipa_uc_wdi_get_dbpa)(struct ipa_wdi_db_params *out);
int (*ipa_uc_reg_rdyCB)(struct ipa_wdi_uc_ready_params *param);
int (*ipa_uc_dereg_rdyCB)(void);
int (*teth_bridge_init)(struct teth_bridge_init_params *params);
int (*teth_bridge_disconnect)(enum ipa_client_type client);
int (*teth_bridge_connect)(
struct teth_bridge_connect_params *connect_params);
void (*ipa_set_client)(
int index, enum ipacm_client_enum client, bool uplink);
enum ipacm_client_enum (*ipa_get_client)(int pipe_idx);
bool (*ipa_get_client_uplink)(int pipe_idx);
int (*ipa_dma_init)(void);
int (*ipa_dma_enable)(void);
int (*ipa_dma_disable)(void);
int (*ipa_dma_sync_memcpy)(u64 dest, u64 src, int len);
int (*ipa_dma_async_memcpy)(u64 dest, u64 src, int len,
void (*user_cb)(void *user1), void *user_param);
int (*ipa_dma_uc_memcpy)(phys_addr_t dest, phys_addr_t src, int len);
void (*ipa_dma_destroy)(void);
bool (*ipa_has_open_aggr_frame)(enum ipa_client_type client);
int (*ipa_generate_tag_process)(void);
int (*ipa_disable_sps_pipe)(enum ipa_client_type client);
void (*ipa_set_tag_process_before_gating)(bool val);
int (*ipa_mhi_init_engine)(struct ipa_mhi_init_engine *params);
int (*ipa_connect_mhi_pipe)(struct ipa_mhi_connect_params_internal *in,
u32 *clnt_hdl);
int (*ipa_disconnect_mhi_pipe)(u32 clnt_hdl);
bool (*ipa_mhi_stop_gsi_channel)(enum ipa_client_type client);
int (*ipa_qmi_disable_force_clear)(u32 request_id);
int (*ipa_qmi_enable_force_clear_datapath_send)(
struct ipa_enable_force_clear_datapath_req_msg_v01 *req);
int (*ipa_qmi_disable_force_clear_datapath_send)(
struct ipa_disable_force_clear_datapath_req_msg_v01 *req);
bool (*ipa_mhi_sps_channel_empty)(enum ipa_client_type client);
int (*ipa_mhi_reset_channel_internal)(enum ipa_client_type client);
int (*ipa_mhi_start_channel_internal)(enum ipa_client_type client);
void (*ipa_get_holb)(int ep_idx, struct ipa_ep_cfg_holb *holb);
int (*ipa_mhi_query_ch_info)(enum ipa_client_type client,
struct gsi_chan_info *ch_info);
int (*ipa_mhi_resume_channels_internal)(
enum ipa_client_type client,
bool LPTransitionRejected,
bool brstmode_enabled,
union __packed gsi_channel_scratch ch_scratch,
u8 index);
int (*ipa_mhi_destroy_channel)(enum ipa_client_type client);
int (*ipa_uc_mhi_send_dl_ul_sync_info)
(union IpaHwMhiDlUlSyncCmdData_t *cmd);
int (*ipa_uc_mhi_init)
(void (*ready_cb)(void), void (*wakeup_request_cb)(void));
void (*ipa_uc_mhi_cleanup)(void);
int (*ipa_uc_mhi_print_stats)(char *dbg_buff, int size);
int (*ipa_uc_mhi_reset_channel)(int channelHandle);
int (*ipa_uc_mhi_suspend_channel)(int channelHandle);
int (*ipa_uc_mhi_stop_event_update_channel)(int channelHandle);
int (*ipa_uc_state_check)(void);
int (*ipa_write_qmap_id)(struct ipa_ioc_write_qmapid *param_in);
int (*ipa_add_interrupt_handler)(enum ipa_irq_type interrupt,
ipa_irq_handler_t handler,
bool deferred_flag,
void *private_data);
int (*ipa_remove_interrupt_handler)(enum ipa_irq_type interrupt);
int (*ipa_restore_suspend_handler)(void);
void (*ipa_bam_reg_dump)(void);
int (*ipa_get_ep_mapping)(enum ipa_client_type client);
bool (*ipa_is_ready)(void);
void (*ipa_proxy_clk_vote)(void);
void (*ipa_proxy_clk_unvote)(void);
bool (*ipa_is_client_handle_valid)(u32 clnt_hdl);
enum ipa_client_type (*ipa_get_client_mapping)(int pipe_idx);
enum ipa_rm_resource_name (*ipa_get_rm_resource_from_ep)(int pipe_idx);
bool (*ipa_get_modem_cfg_emb_pipe_flt)(void);
enum ipa_transport_type (*ipa_get_transport_type)(void);
int (*ipa_ap_suspend)(struct device *dev);
int (*ipa_ap_resume)(struct device *dev);
int (*ipa_stop_gsi_channel)(u32 clnt_hdl);
struct iommu_domain *(*ipa_get_smmu_domain)(void);
int (*ipa_disable_apps_wan_cons_deaggr)(uint32_t agg_size,
uint32_t agg_count);
struct device *(*ipa_get_dma_dev)(void);
int (*ipa_release_wdi_mapping)(u32 num_buffers,
struct ipa_wdi_buffer_info *info);
int (*ipa_create_wdi_mapping)(u32 num_buffers,
struct ipa_wdi_buffer_info *info);
struct ipa_gsi_ep_config *(*ipa_get_gsi_ep_info)(int ipa_ep_idx);
int (*ipa_register_ipa_ready_cb)(void (*ipa_ready_cb)(void *user_data),
void *user_data);
void (*ipa_inc_client_enable_clks)(
struct ipa_active_client_logging_info *id);
void (*ipa_dec_client_disable_clks)(
struct ipa_active_client_logging_info *id);
int (*ipa_inc_client_enable_clks_no_block)(
struct ipa_active_client_logging_info *id);
int (*ipa_suspend_resource_no_block)(
enum ipa_rm_resource_name resource);
int (*ipa_resume_resource)(enum ipa_rm_resource_name name);
int (*ipa_suspend_resource_sync)(enum ipa_rm_resource_name resource);
int (*ipa_set_required_perf_profile)(
enum ipa_voltage_level floor_voltage, u32 bandwidth_mbps);
void *(*ipa_get_ipc_logbuf)(void);
void *(*ipa_get_ipc_logbuf_low)(void);
int (*ipa_rx_poll)(u32 clnt_hdl, int budget);
void (*ipa_recycle_wan_skb)(struct sk_buff *skb);
int (*ipa_setup_uc_ntn_pipes)(struct ipa_ntn_conn_in_params *in,
ipa_notify_cb notify, void *priv, u8 hdr_len,
struct ipa_ntn_conn_out_params *);
int (*ipa_tear_down_uc_offload_pipes)(int ipa_ep_idx_ul,
int ipa_ep_idx_dl);
};
#ifdef CONFIG_IPA
int ipa_plat_drv_probe(struct platform_device *pdev_p,
struct ipa_api_controller *api_ctrl,
const struct of_device_id *pdrv_match);
#else
static inline int ipa_plat_drv_probe(struct platform_device *pdev_p,
struct ipa_api_controller *api_ctrl,
const struct of_device_id *pdrv_match)
{
return -ENODEV;
}
#endif /* (CONFIG_IPA) */
#ifdef CONFIG_IPA3
int ipa3_plat_drv_probe(struct platform_device *pdev_p,
struct ipa_api_controller *api_ctrl,
const struct of_device_id *pdrv_match);
#else
static inline int ipa3_plat_drv_probe(struct platform_device *pdev_p,
struct ipa_api_controller *api_ctrl,
const struct of_device_id *pdrv_match)
{
return -ENODEV;
}
#endif /* (CONFIG_IPA3) */
#endif /* _IPA_API_H_ */


@@ -0,0 +1,2 @@
obj-$(CONFIG_IPA3) += ipa_usb.o odu_bridge.o ipa_mhi_client.o ipa_uc_offload.o
obj-$(CONFIG_IPA) += odu_bridge.o ipa_mhi_client.o ipa_uc_offload.o

File diff suppressed because it is too large.


@@ -0,0 +1,597 @@
/* Copyright (c) 2015, 2016 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/ipa_uc_offload.h>
#include <linux/msm_ipa.h>
#include "../ipa_common_i.h"
#define IPA_NTN_DMA_POOL_ALIGNMENT 8
#define OFFLOAD_DRV_NAME "ipa_uc_offload"
#define IPA_UC_OFFLOAD_DBG(fmt, args...) \
do { \
pr_debug(OFFLOAD_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
OFFLOAD_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
OFFLOAD_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_UC_OFFLOAD_LOW(fmt, args...) \
do { \
pr_debug(OFFLOAD_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
OFFLOAD_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_UC_OFFLOAD_ERR(fmt, args...) \
do { \
pr_err(OFFLOAD_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
OFFLOAD_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
OFFLOAD_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_UC_OFFLOAD_INFO(fmt, args...) \
do { \
pr_info(OFFLOAD_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
OFFLOAD_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
OFFLOAD_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
enum ipa_uc_offload_state {
IPA_UC_OFFLOAD_STATE_INVALID,
IPA_UC_OFFLOAD_STATE_INITIALIZED,
IPA_UC_OFFLOAD_STATE_UP,
IPA_UC_OFFLOAD_STATE_DOWN,
};
struct ipa_uc_offload_ctx {
enum ipa_uc_offload_proto proto;
enum ipa_uc_offload_state state;
void *priv;
u8 hdr_len;
u32 partial_hdr_hdl[IPA_IP_MAX];
char netdev_name[IPA_RESOURCE_NAME_MAX];
ipa_notify_cb notify;
struct completion ntn_completion;
};
static struct ipa_uc_offload_ctx *ipa_uc_offload_ctx[IPA_UC_MAX_PROT_SIZE];
static int ipa_commit_partial_hdr(
struct ipa_ioc_add_hdr *hdr,
const char *netdev_name,
struct ipa_hdr_info *hdr_info)
{
int i;
if (hdr == NULL || hdr_info == NULL) {
IPA_UC_OFFLOAD_ERR("Invalid input\n");
return -EINVAL;
}
hdr->commit = 1;
hdr->num_hdrs = 2;
snprintf(hdr->hdr[0].name, sizeof(hdr->hdr[0].name),
"%s_ipv4", netdev_name);
snprintf(hdr->hdr[1].name, sizeof(hdr->hdr[1].name),
"%s_ipv6", netdev_name);
for (i = IPA_IP_v4; i < IPA_IP_MAX; i++) {
hdr->hdr[i].hdr_len = hdr_info[i].hdr_len;
memcpy(hdr->hdr[i].hdr, hdr_info[i].hdr, hdr->hdr[i].hdr_len);
hdr->hdr[i].type = hdr_info[i].hdr_type;
hdr->hdr[i].is_partial = 1;
hdr->hdr[i].is_eth2_ofst_valid = 1;
hdr->hdr[i].eth2_ofst = hdr_info[i].dst_mac_addr_offset;
}
if (ipa_add_hdr(hdr)) {
IPA_UC_OFFLOAD_ERR("fail to add partial headers\n");
return -EFAULT;
}
return 0;
}
static int ipa_uc_offload_ntn_reg_intf(
struct ipa_uc_offload_intf_params *inp,
struct ipa_uc_offload_out_params *outp,
struct ipa_uc_offload_ctx *ntn_ctx)
{
struct ipa_ioc_add_hdr *hdr;
struct ipa_tx_intf tx;
struct ipa_rx_intf rx;
struct ipa_ioc_tx_intf_prop tx_prop[2];
struct ipa_ioc_rx_intf_prop rx_prop[2];
u32 len;
int ret = 0;
IPA_UC_OFFLOAD_DBG("register interface for netdev %s\n",
inp->netdev_name);
memcpy(ntn_ctx->netdev_name, inp->netdev_name, IPA_RESOURCE_NAME_MAX);
ntn_ctx->hdr_len = inp->hdr_info[0].hdr_len;
ntn_ctx->notify = inp->notify;
ntn_ctx->priv = inp->priv;
/* add partial header */
len = sizeof(struct ipa_ioc_add_hdr) + 2 * sizeof(struct ipa_hdr_add);
hdr = kzalloc(len, GFP_KERNEL);
if (hdr == NULL) {
IPA_UC_OFFLOAD_ERR("fail to alloc %d bytes\n", len);
return -ENOMEM;
}
if (ipa_commit_partial_hdr(hdr, ntn_ctx->netdev_name, inp->hdr_info)) {
IPA_UC_OFFLOAD_ERR("fail to commit partial headers\n");
ret = -EFAULT;
goto fail;
}
/* populate tx prop */
tx.num_props = 2;
tx.prop = tx_prop;
memset(tx_prop, 0, sizeof(tx_prop));
tx_prop[0].ip = IPA_IP_v4;
tx_prop[0].dst_pipe = IPA_CLIENT_ODU_TETH_CONS;
tx_prop[0].hdr_l2_type = inp->hdr_info[0].hdr_type;
memcpy(tx_prop[0].hdr_name, hdr->hdr[IPA_IP_v4].name,
sizeof(tx_prop[0].hdr_name));
tx_prop[1].ip = IPA_IP_v6;
tx_prop[1].dst_pipe = IPA_CLIENT_ODU_TETH_CONS;
tx_prop[1].hdr_l2_type = inp->hdr_info[1].hdr_type;
memcpy(tx_prop[1].hdr_name, hdr->hdr[IPA_IP_v6].name,
sizeof(tx_prop[1].hdr_name));
/* populate rx prop */
rx.num_props = 2;
rx.prop = rx_prop;
memset(rx_prop, 0, sizeof(rx_prop));
rx_prop[0].ip = IPA_IP_v4;
rx_prop[0].src_pipe = IPA_CLIENT_ODU_PROD;
rx_prop[0].hdr_l2_type = inp->hdr_info[0].hdr_type;
if (inp->is_meta_data_valid) {
rx_prop[0].attrib.attrib_mask |= IPA_FLT_META_DATA;
rx_prop[0].attrib.meta_data = inp->meta_data;
rx_prop[0].attrib.meta_data_mask = inp->meta_data_mask;
}
rx_prop[1].ip = IPA_IP_v6;
rx_prop[1].src_pipe = IPA_CLIENT_ODU_PROD;
rx_prop[1].hdr_l2_type = inp->hdr_info[1].hdr_type;
if (inp->is_meta_data_valid) {
rx_prop[1].attrib.attrib_mask |= IPA_FLT_META_DATA;
rx_prop[1].attrib.meta_data = inp->meta_data;
rx_prop[1].attrib.meta_data_mask = inp->meta_data_mask;
}
if (ipa_register_intf(inp->netdev_name, &tx, &rx)) {
IPA_UC_OFFLOAD_ERR("fail to add interface prop\n");
memset(ntn_ctx, 0, sizeof(*ntn_ctx));
ret = -EFAULT;
goto fail;
}
ntn_ctx->partial_hdr_hdl[IPA_IP_v4] = hdr->hdr[IPA_IP_v4].hdr_hdl;
ntn_ctx->partial_hdr_hdl[IPA_IP_v6] = hdr->hdr[IPA_IP_v6].hdr_hdl;
init_completion(&ntn_ctx->ntn_completion);
ntn_ctx->state = IPA_UC_OFFLOAD_STATE_INITIALIZED;
fail:
kfree(hdr);
return ret;
}
int ipa_uc_offload_reg_intf(
struct ipa_uc_offload_intf_params *inp,
struct ipa_uc_offload_out_params *outp)
{
struct ipa_uc_offload_ctx *ctx;
int ret = 0;
if (inp == NULL || outp == NULL) {
IPA_UC_OFFLOAD_ERR("invalid params in=%p out=%p\n", inp, outp);
return -EINVAL;
}
if (inp->proto <= IPA_UC_INVALID ||
inp->proto >= IPA_UC_MAX_PROT_SIZE) {
IPA_UC_OFFLOAD_ERR("invalid proto %d\n", inp->proto);
return -EINVAL;
}
if (!ipa_uc_offload_ctx[inp->proto]) {
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (ctx == NULL) {
IPA_UC_OFFLOAD_ERR("fail to alloc uc offload ctx\n");
return -EFAULT;
}
ipa_uc_offload_ctx[inp->proto] = ctx;
ctx->proto = inp->proto;
} else
ctx = ipa_uc_offload_ctx[inp->proto];
if (ctx->state != IPA_UC_OFFLOAD_STATE_INVALID) {
IPA_UC_OFFLOAD_ERR("Already Initialized\n");
return -EINVAL;
}
if (ctx->proto == IPA_UC_NTN) {
ret = ipa_uc_offload_ntn_reg_intf(inp, outp, ctx);
if (!ret)
outp->clnt_hndl = IPA_UC_NTN;
}
return ret;
}
EXPORT_SYMBOL(ipa_uc_offload_reg_intf);
static int ipa_uc_ntn_cons_release(void)
{
return 0;
}
static int ipa_uc_ntn_cons_request(void)
{
int ret = 0;
struct ipa_uc_offload_ctx *ntn_ctx;
ntn_ctx = ipa_uc_offload_ctx[IPA_UC_NTN];
if (!ntn_ctx) {
IPA_UC_OFFLOAD_ERR("NTN is not initialized\n");
ret = -EFAULT;
} else if (ntn_ctx->state != IPA_UC_OFFLOAD_STATE_UP) {
IPA_UC_OFFLOAD_ERR("Invalid State: %d\n", ntn_ctx->state);
ret = -EFAULT;
}
return ret;
}
static void ipa_uc_offload_rm_notify(void *user_data, enum ipa_rm_event event,
unsigned long data)
{
struct ipa_uc_offload_ctx *offload_ctx;
offload_ctx = (struct ipa_uc_offload_ctx *)user_data;
if (!(offload_ctx && offload_ctx->proto > IPA_UC_INVALID &&
offload_ctx->proto < IPA_UC_MAX_PROT_SIZE)) {
IPA_UC_OFFLOAD_ERR("Invalid user data\n");
return;
}
if (offload_ctx->state != IPA_UC_OFFLOAD_STATE_INITIALIZED)
IPA_UC_OFFLOAD_ERR("Invalid State: %d\n", offload_ctx->state);
switch (event) {
case IPA_RM_RESOURCE_GRANTED:
complete_all(&offload_ctx->ntn_completion);
break;
case IPA_RM_RESOURCE_RELEASED:
break;
default:
IPA_UC_OFFLOAD_ERR("Invalid RM Evt: %d", event);
break;
}
}
int ipa_uc_ntn_conn_pipes(struct ipa_ntn_conn_in_params *inp,
struct ipa_ntn_conn_out_params *outp,
struct ipa_uc_offload_ctx *ntn_ctx)
{
struct ipa_rm_create_params param;
int result = 0;
if (inp->dl.ring_base_pa % IPA_NTN_DMA_POOL_ALIGNMENT ||
inp->dl.buff_pool_base_pa % IPA_NTN_DMA_POOL_ALIGNMENT) {
IPA_UC_OFFLOAD_ERR("alignment failure on TX\n");
return -EINVAL;
}
if (inp->ul.ring_base_pa % IPA_NTN_DMA_POOL_ALIGNMENT ||
inp->ul.buff_pool_base_pa % IPA_NTN_DMA_POOL_ALIGNMENT) {
IPA_UC_OFFLOAD_ERR("alignment failure on RX\n");
return -EINVAL;
}
memset(&param, 0, sizeof(param));
param.name = IPA_RM_RESOURCE_ODU_ADAPT_PROD;
param.reg_params.user_data = ntn_ctx;
param.reg_params.notify_cb = ipa_uc_offload_rm_notify;
param.floor_voltage = IPA_VOLTAGE_SVS;
result = ipa_rm_create_resource(&param);
if (result) {
IPA_UC_OFFLOAD_ERR("fail to create ODU_ADAPT_PROD resource\n");
return -EFAULT;
}
memset(&param, 0, sizeof(param));
param.name = IPA_RM_RESOURCE_ODU_ADAPT_CONS;
param.request_resource = ipa_uc_ntn_cons_request;
param.release_resource = ipa_uc_ntn_cons_release;
result = ipa_rm_create_resource(&param);
if (result) {
IPA_UC_OFFLOAD_ERR("fail to create ODU_ADAPT_CONS resource\n");
goto fail_create_rm_cons;
}
if (ipa_rm_add_dependency(IPA_RM_RESOURCE_ODU_ADAPT_PROD,
IPA_RM_RESOURCE_APPS_CONS)) {
IPA_UC_OFFLOAD_ERR("fail to add rm dependency\n");
result = -EFAULT;
goto fail;
}
if (ipa_setup_uc_ntn_pipes(inp, ntn_ctx->notify,
ntn_ctx->priv, ntn_ctx->hdr_len, outp)) {
IPA_UC_OFFLOAD_ERR("fail to setup uc offload pipes\n");
result = -EFAULT;
goto fail;
}
ntn_ctx->state = IPA_UC_OFFLOAD_STATE_UP;
result = ipa_rm_request_resource(IPA_RM_RESOURCE_ODU_ADAPT_PROD);
if (result == -EINPROGRESS) {
if (wait_for_completion_timeout(&ntn_ctx->ntn_completion,
10*HZ) == 0) {
IPA_UC_OFFLOAD_ERR("ODU PROD resource req time out\n");
result = -EFAULT;
goto fail;
}
} else if (result != 0) {
IPA_UC_OFFLOAD_ERR("fail to request resource\n");
result = -EFAULT;
goto fail;
}
return 0;
fail:
ipa_rm_delete_resource(IPA_RM_RESOURCE_ODU_ADAPT_CONS);
fail_create_rm_cons:
ipa_rm_delete_resource(IPA_RM_RESOURCE_ODU_ADAPT_PROD);
return result;
}
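ipa_uc_ntn_conn_pipes() rejects ring and buffer-pool base addresses that are not aligned to IPA_NTN_DMA_POOL_ALIGNMENT. A minimal stand-alone sketch of that check, assuming a power-of-two alignment value (the value 8 here is a stand-in, not the driver's constant):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for IPA_NTN_DMA_POOL_ALIGNMENT; assumed to be a power of two. */
#define DMA_POOL_ALIGNMENT 8

/* Returns 1 if the physical address is suitably aligned for the DMA pool. */
static int is_dma_aligned(uint64_t pa)
{
	/*
	 * For power-of-two alignments, the modulo test used in
	 * ipa_uc_ntn_conn_pipes() is equivalent to a mask test.
	 */
	return (pa % DMA_POOL_ALIGNMENT) == 0 &&
	       (pa & (DMA_POOL_ALIGNMENT - 1)) == 0;
}
```

The driver uses the modulo form; for power-of-two alignments a compiler emits the same mask instruction for both.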
int ipa_uc_offload_conn_pipes(struct ipa_uc_offload_conn_in_params *inp,
struct ipa_uc_offload_conn_out_params *outp)
{
int ret = 0;
struct ipa_uc_offload_ctx *offload_ctx;
if (!(inp && outp)) {
IPA_UC_OFFLOAD_ERR("bad parm. in=%p out=%p\n", inp, outp);
return -EINVAL;
}
if (inp->clnt_hndl <= IPA_UC_INVALID ||
inp->clnt_hndl >= IPA_UC_MAX_PROT_SIZE) {
IPA_UC_OFFLOAD_ERR("invalid client handle %d\n",
inp->clnt_hndl);
return -EINVAL;
}
offload_ctx = ipa_uc_offload_ctx[inp->clnt_hndl];
if (!offload_ctx) {
IPA_UC_OFFLOAD_ERR("Invalid Handle\n");
return -EINVAL;
}
if (offload_ctx->state != IPA_UC_OFFLOAD_STATE_INITIALIZED) {
IPA_UC_OFFLOAD_ERR("Invalid state %d\n", offload_ctx->state);
return -EPERM;
}
switch (offload_ctx->proto) {
case IPA_UC_NTN:
ret = ipa_uc_ntn_conn_pipes(&inp->u.ntn, &outp->u.ntn,
offload_ctx);
break;
default:
IPA_UC_OFFLOAD_ERR("Invalid Proto: %d\n", offload_ctx->proto);
ret = -EINVAL;
break;
}
return ret;
}
EXPORT_SYMBOL(ipa_uc_offload_conn_pipes);
int ipa_set_perf_profile(struct ipa_perf_profile *profile)
{
struct ipa_rm_perf_profile rm_profile;
enum ipa_rm_resource_name resource_name;
if (profile == NULL) {
IPA_UC_OFFLOAD_ERR("Invalid input\n");
return -EINVAL;
}
rm_profile.max_supported_bandwidth_mbps =
profile->max_supported_bw_mbps;
if (profile->client == IPA_CLIENT_ODU_PROD) {
resource_name = IPA_RM_RESOURCE_ODU_ADAPT_PROD;
} else if (profile->client == IPA_CLIENT_ODU_TETH_CONS) {
resource_name = IPA_RM_RESOURCE_ODU_ADAPT_CONS;
} else {
IPA_UC_OFFLOAD_ERR("not supported\n");
return -EINVAL;
}
if (ipa_rm_set_perf_profile(resource_name, &rm_profile)) {
IPA_UC_OFFLOAD_ERR("fail to setup rm perf profile\n");
return -EFAULT;
}
return 0;
}
EXPORT_SYMBOL(ipa_set_perf_profile);
static int ipa_uc_ntn_disconn_pipes(struct ipa_uc_offload_ctx *ntn_ctx)
{
int ipa_ep_idx_ul, ipa_ep_idx_dl;
ntn_ctx->state = IPA_UC_OFFLOAD_STATE_DOWN;
if (ipa_rm_delete_dependency(IPA_RM_RESOURCE_ODU_ADAPT_PROD,
IPA_RM_RESOURCE_APPS_CONS)) {
IPA_UC_OFFLOAD_ERR("fail to delete rm dependency\n");
return -EFAULT;
}
if (ipa_rm_delete_resource(IPA_RM_RESOURCE_ODU_ADAPT_PROD)) {
IPA_UC_OFFLOAD_ERR("fail to delete ODU_ADAPT_PROD resource\n");
return -EFAULT;
}
if (ipa_rm_delete_resource(IPA_RM_RESOURCE_ODU_ADAPT_CONS)) {
IPA_UC_OFFLOAD_ERR("fail to delete ODU_ADAPT_CONS resource\n");
return -EFAULT;
}
ipa_ep_idx_ul = ipa_get_ep_mapping(IPA_CLIENT_ODU_PROD);
ipa_ep_idx_dl = ipa_get_ep_mapping(IPA_CLIENT_ODU_TETH_CONS);
if (ipa_tear_down_uc_offload_pipes(ipa_ep_idx_ul, ipa_ep_idx_dl)) {
IPA_UC_OFFLOAD_ERR("fail to tear down uc offload pipes\n");
return -EFAULT;
}
return 0;
}
int ipa_uc_offload_disconn_pipes(u32 clnt_hdl)
{
struct ipa_uc_offload_ctx *offload_ctx;
int ret = 0;
if (clnt_hdl <= IPA_UC_INVALID ||
clnt_hdl >= IPA_UC_MAX_PROT_SIZE) {
IPA_UC_OFFLOAD_ERR("Invalid client handle %d\n", clnt_hdl);
return -EINVAL;
}
offload_ctx = ipa_uc_offload_ctx[clnt_hdl];
if (!offload_ctx) {
IPA_UC_OFFLOAD_ERR("Invalid client Handle\n");
return -EINVAL;
}
if (offload_ctx->state != IPA_UC_OFFLOAD_STATE_UP) {
IPA_UC_OFFLOAD_ERR("Invalid state\n");
return -EINVAL;
}
switch (offload_ctx->proto) {
case IPA_UC_NTN:
ret = ipa_uc_ntn_disconn_pipes(offload_ctx);
break;
default:
IPA_UC_OFFLOAD_ERR("Invalid Proto: %d\n", clnt_hdl);
ret = -EINVAL;
break;
}
return ret;
}
EXPORT_SYMBOL(ipa_uc_offload_disconn_pipes);
static int ipa_uc_ntn_cleanup(struct ipa_uc_offload_ctx *ntn_ctx)
{
int len, result = 0;
struct ipa_ioc_del_hdr *hdr;
len = sizeof(struct ipa_ioc_del_hdr) + 2 * sizeof(struct ipa_hdr_del);
hdr = kzalloc(len, GFP_KERNEL);
if (hdr == NULL) {
IPA_UC_OFFLOAD_ERR("fail to alloc %d bytes\n", len);
return -ENOMEM;
}
hdr->commit = 1;
hdr->num_hdls = 2;
hdr->hdl[0].hdl = ntn_ctx->partial_hdr_hdl[0];
hdr->hdl[1].hdl = ntn_ctx->partial_hdr_hdl[1];
if (ipa_del_hdr(hdr)) {
IPA_UC_OFFLOAD_ERR("fail to delete partial header\n");
result = -EFAULT;
goto fail;
}
if (ipa_deregister_intf(ntn_ctx->netdev_name)) {
IPA_UC_OFFLOAD_ERR("fail to delete interface prop\n");
result = -EFAULT;
goto fail;
}
fail:
kfree(hdr);
return result;
}
int ipa_uc_offload_cleanup(u32 clnt_hdl)
{
struct ipa_uc_offload_ctx *offload_ctx;
int ret = 0;
if (clnt_hdl <= IPA_UC_INVALID ||
clnt_hdl >= IPA_UC_MAX_PROT_SIZE) {
IPA_UC_OFFLOAD_ERR("Invalid client handle %d\n", clnt_hdl);
return -EINVAL;
}
offload_ctx = ipa_uc_offload_ctx[clnt_hdl];
if (!offload_ctx) {
IPA_UC_OFFLOAD_ERR("Invalid client handle %d\n", clnt_hdl);
return -EINVAL;
}
if (offload_ctx->state != IPA_UC_OFFLOAD_STATE_DOWN) {
IPA_UC_OFFLOAD_ERR("Invalid State %d\n", offload_ctx->state);
return -EINVAL;
}
switch (offload_ctx->proto) {
case IPA_UC_NTN:
ret = ipa_uc_ntn_cleanup(offload_ctx);
break;
default:
IPA_UC_OFFLOAD_ERR("Invalid Proto: %d\n", clnt_hdl);
ret = -EINVAL;
break;
}
if (!ret) {
kfree(offload_ctx);
offload_ctx = NULL;
ipa_uc_offload_ctx[clnt_hdl] = NULL;
}
return ret;
}
EXPORT_SYMBOL(ipa_uc_offload_cleanup);
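The exported entry points above enforce a strict state order on each offload context: conn_pipes requires INITIALIZED, disconn_pipes requires UP, and cleanup requires DOWN. A self-contained model of that state gate (the enum and helper below mirror the driver's IPA_UC_OFFLOAD_STATE_* checks but are stand-ins, not kernel code):

```c
#include <assert.h>

/* Sketch of the offload state checks enforced by conn/disconn/cleanup. */
enum uc_state { STATE_INVALID, STATE_INITIALIZED, STATE_UP, STATE_DOWN };
enum uc_op { OP_CONN, OP_DISCONN, OP_CLEANUP };

/* Returns 1 if the operation is permitted in the current state. */
static int uc_op_allowed(enum uc_state s, enum uc_op op)
{
	switch (op) {
	case OP_CONN:    return s == STATE_INITIALIZED; /* conn_pipes */
	case OP_DISCONN: return s == STATE_UP;          /* disconn_pipes */
	case OP_CLEANUP: return s == STATE_DOWN;        /* cleanup */
	}
	return 0;
}
```

disconn_pipes moves the context to DOWN before tearing the pipes down, which is what makes a subsequent cleanup call legal.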

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,383 @@
/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/ipa_mhi.h>
#include <linux/ipa_qmi_service_v01.h>
#ifndef _IPA_COMMON_I_H_
#define _IPA_COMMON_I_H_
#include <linux/ipc_logging.h>
#include <linux/ipa.h>
#include <linux/ipa_uc_offload.h>
#define __FILENAME__ \
(strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : __FILE__)
#define IPA_ACTIVE_CLIENTS_PREP_EP(log_info, client) \
log_info.file = __FILENAME__; \
log_info.line = __LINE__; \
log_info.type = EP; \
log_info.id_string = ipa_clients_strings[client]
#define IPA_ACTIVE_CLIENTS_PREP_SIMPLE(log_info) \
log_info.file = __FILENAME__; \
log_info.line = __LINE__; \
log_info.type = SIMPLE; \
log_info.id_string = __func__
#define IPA_ACTIVE_CLIENTS_PREP_RESOURCE(log_info, resource_name) \
log_info.file = __FILENAME__; \
log_info.line = __LINE__; \
log_info.type = RESOURCE; \
log_info.id_string = resource_name
#define IPA_ACTIVE_CLIENTS_PREP_SPECIAL(log_info, id_str) \
log_info.file = __FILENAME__; \
log_info.line = __LINE__; \
log_info.type = SPECIAL; \
log_info.id_string = id_str
#define IPA_ACTIVE_CLIENTS_INC_EP(client) \
do { \
struct ipa_active_client_logging_info log_info; \
IPA_ACTIVE_CLIENTS_PREP_EP(log_info, client); \
ipa_inc_client_enable_clks(&log_info); \
} while (0)
#define IPA_ACTIVE_CLIENTS_DEC_EP(client) \
do { \
struct ipa_active_client_logging_info log_info; \
IPA_ACTIVE_CLIENTS_PREP_EP(log_info, client); \
ipa_dec_client_disable_clks(&log_info); \
} while (0)
#define IPA_ACTIVE_CLIENTS_INC_SIMPLE() \
do { \
struct ipa_active_client_logging_info log_info; \
IPA_ACTIVE_CLIENTS_PREP_SIMPLE(log_info); \
ipa_inc_client_enable_clks(&log_info); \
} while (0)
#define IPA_ACTIVE_CLIENTS_DEC_SIMPLE() \
do { \
struct ipa_active_client_logging_info log_info; \
IPA_ACTIVE_CLIENTS_PREP_SIMPLE(log_info); \
ipa_dec_client_disable_clks(&log_info); \
} while (0)
#define IPA_ACTIVE_CLIENTS_INC_RESOURCE(resource_name) \
do { \
struct ipa_active_client_logging_info log_info; \
IPA_ACTIVE_CLIENTS_PREP_RESOURCE(log_info, resource_name); \
ipa_inc_client_enable_clks(&log_info); \
} while (0)
#define IPA_ACTIVE_CLIENTS_DEC_RESOURCE(resource_name) \
do { \
struct ipa_active_client_logging_info log_info; \
IPA_ACTIVE_CLIENTS_PREP_RESOURCE(log_info, resource_name); \
ipa_dec_client_disable_clks(&log_info); \
} while (0)
#define IPA_ACTIVE_CLIENTS_INC_SPECIAL(id_str) \
do { \
struct ipa_active_client_logging_info log_info; \
IPA_ACTIVE_CLIENTS_PREP_SPECIAL(log_info, id_str); \
ipa_inc_client_enable_clks(&log_info); \
} while (0)
#define IPA_ACTIVE_CLIENTS_DEC_SPECIAL(id_str) \
do { \
struct ipa_active_client_logging_info log_info; \
IPA_ACTIVE_CLIENTS_PREP_SPECIAL(log_info, id_str); \
ipa_dec_client_disable_clks(&log_info); \
} while (0)
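Each IPA_ACTIVE_CLIENTS_INC_*/DEC_* macro declares a local logging record, fills it via the matching PREP macro, and calls the clock-vote helper, all wrapped in do { } while (0) so the expansion behaves as a single statement (safe inside an unbraced if/else). A stand-alone sketch of the same pattern, where the log struct and counter are stand-ins rather than the driver's types:

```c
#include <assert.h>

/* Stand-ins for the active-clients counter and logging record. */
static int active_clients;

struct log_info { const char *file; int line; const char *id; };

static void inc_client(struct log_info *li) { (void)li; active_clients++; }
static void dec_client(struct log_info *li) { (void)li; active_clients--; }

/* Single-statement macro: declare, fill, call, inside do { } while (0). */
#define CLIENTS_INC_SIMPLE() \
	do { \
		struct log_info li = { __FILE__, __LINE__, __func__ }; \
		inc_client(&li); \
	} while (0)

#define CLIENTS_DEC_SIMPLE() \
	do { \
		struct log_info li = { __FILE__, __LINE__, __func__ }; \
		dec_client(&li); \
	} while (0)
```

Capturing file, line, and function at the call site is what lets the real driver attribute every clock vote to its caller in the IPC log.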
#define ipa_assert_on(condition)\
do {\
if (unlikely(condition))\
ipa_assert();\
} while (0)
#define IPA_CLIENT_IS_PROD(x) ((x) >= IPA_CLIENT_PROD && (x) < IPA_CLIENT_CONS)
#define IPA_CLIENT_IS_CONS(x) ((x) >= IPA_CLIENT_CONS && (x) < IPA_CLIENT_MAX)
#define IPA_GSI_CHANNEL_STOP_SLEEP_MIN_USEC (1000)
#define IPA_GSI_CHANNEL_STOP_SLEEP_MAX_USEC (2000)
enum ipa_active_client_log_type {
EP,
SIMPLE,
RESOURCE,
SPECIAL,
INVALID
};
struct ipa_active_client_logging_info {
const char *id_string;
char *file;
int line;
enum ipa_active_client_log_type type;
};
/**
* struct ipa_mem_buffer - IPA memory buffer
* @base: base
* @phys_base: physical base address
* @size: size of memory buffer
*/
struct ipa_mem_buffer {
void *base;
dma_addr_t phys_base;
u32 size;
};
#define IPA_MHI_GSI_ER_START 10
#define IPA_MHI_GSI_ER_END 16
/**
* enum ipa3_mhi_burst_mode - MHI channel burst mode state
*
* Values are according to MHI specification
* @IPA_MHI_BURST_MODE_DEFAULT: burst mode enabled for HW channels,
* disabled for SW channels
* @IPA_MHI_BURST_MODE_RESERVED:
* @IPA_MHI_BURST_MODE_DISABLE: Burst mode is disabled for this channel
* @IPA_MHI_BURST_MODE_ENABLE: Burst mode is enabled for this channel
*
*/
enum ipa3_mhi_burst_mode {
IPA_MHI_BURST_MODE_DEFAULT,
IPA_MHI_BURST_MODE_RESERVED,
IPA_MHI_BURST_MODE_DISABLE,
IPA_MHI_BURST_MODE_ENABLE,
};
/**
* enum ipa_hw_mhi_channel_states - MHI channel state machine
*
* Values are according to MHI specification
* @IPA_HW_MHI_CHANNEL_STATE_DISABLE: Channel is disabled and not processed by
* the host or device.
 * @IPA_HW_MHI_CHANNEL_STATE_ENABLE: A channel is enabled after being
 * initialized and configured by the host, including its channel context and
 * associated transfer ring. While in this state, the channel is not active
 * and the device does not process transfers.
* @IPA_HW_MHI_CHANNEL_STATE_RUN: The device processes transfers and doorbell
* for channels.
* @IPA_HW_MHI_CHANNEL_STATE_SUSPEND: Used to halt operations on the channel.
* The device does not process transfers for the channel in this state.
* This state is typically used to synchronize the transition to low power
* modes.
* @IPA_HW_MHI_CHANNEL_STATE_STOP: Used to halt operations on the channel.
* The device does not process transfers for the channel in this state.
* @IPA_HW_MHI_CHANNEL_STATE_ERROR: The device detected an error in an element
* from the transfer ring associated with the channel.
* @IPA_HW_MHI_CHANNEL_STATE_INVALID: Invalid state. Shall not be in use in
* operational scenario.
*/
enum ipa_hw_mhi_channel_states {
IPA_HW_MHI_CHANNEL_STATE_DISABLE = 0,
IPA_HW_MHI_CHANNEL_STATE_ENABLE = 1,
IPA_HW_MHI_CHANNEL_STATE_RUN = 2,
IPA_HW_MHI_CHANNEL_STATE_SUSPEND = 3,
IPA_HW_MHI_CHANNEL_STATE_STOP = 4,
IPA_HW_MHI_CHANNEL_STATE_ERROR = 5,
IPA_HW_MHI_CHANNEL_STATE_INVALID = 0xFF
};
/**
* Structure holding the parameters for IPA_CPU_2_HW_CMD_MHI_DL_UL_SYNC_INFO
* command. Parameters are sent as 32b immediate parameters.
 * @isDlUlSyncEnabled: Flag to indicate if DL/UL synchronization is enabled
* @UlAccmVal: UL Timer Accumulation value (Period after which device will poll
* for UL data)
* @ulMsiEventThreshold: Threshold at which HW fires MSI to host for UL events
* @dlMsiEventThreshold: Threshold at which HW fires MSI to host for DL events
*/
union IpaHwMhiDlUlSyncCmdData_t {
struct IpaHwMhiDlUlSyncCmdParams_t {
u32 isDlUlSyncEnabled:8;
u32 UlAccmVal:8;
u32 ulMsiEventThreshold:8;
u32 dlMsiEventThreshold:8;
} params;
u32 raw32b;
};
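IpaHwMhiDlUlSyncCmdData_t packs four 8-bit parameters into one 32-bit immediate command word, with raw32b providing the raw view of the same storage. A userspace replica (uint32_t standing in for u32) showing the round-trip; note that bit-field ordering within the word is implementation-defined, which is one reason the raw view exists:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace replica of union IpaHwMhiDlUlSyncCmdData_t: four 8-bit
 * fields sharing one 32-bit word with a raw accessor.
 */
union dl_ul_sync_cmd {
	struct {
		uint32_t isDlUlSyncEnabled:8;
		uint32_t UlAccmVal:8;
		uint32_t ulMsiEventThreshold:8;
		uint32_t dlMsiEventThreshold:8;
	} params;
	uint32_t raw32b;  /* the word sent as an immediate parameter */
};
```

Writing a field and reading raw32b (or vice versa) is how the driver hands the parameters to the uC as a single 32-bit immediate.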
struct ipa_mhi_ch_ctx {
u8 chstate;/*0-7*/
u8 brstmode:2;/*8-9*/
u8 pollcfg:6;/*10-15*/
u16 rsvd;/*16-31*/
u32 chtype;
u32 erindex;
u64 rbase;
u64 rlen;
u64 rp;
u64 wp;
} __packed;
struct ipa_mhi_ev_ctx {
u32 intmodc:16;
u32 intmodt:16;
u32 ertype;
u32 msivec;
u64 rbase;
u64 rlen;
u64 rp;
u64 wp;
} __packed;
struct ipa_mhi_init_uc_engine {
struct ipa_mhi_msi_info *msi;
u32 mmio_addr;
u32 host_ctrl_addr;
u32 host_data_addr;
u32 first_ch_idx;
u32 first_er_idx;
union IpaHwMhiDlUlSyncCmdData_t *ipa_cached_dl_ul_sync_info;
};
struct ipa_mhi_init_gsi_engine {
u32 first_ch_idx;
};
struct ipa_mhi_init_engine {
struct ipa_mhi_init_uc_engine uC;
struct ipa_mhi_init_gsi_engine gsi;
};
struct start_gsi_channel {
enum ipa_hw_mhi_channel_states state;
struct ipa_mhi_msi_info *msi;
struct ipa_mhi_ev_ctx *ev_ctx_host;
u64 event_context_addr;
struct ipa_mhi_ch_ctx *ch_ctx_host;
u64 channel_context_addr;
void (*ch_err_cb)(struct gsi_chan_err_notify *notify);
void (*ev_err_cb)(struct gsi_evt_err_notify *notify);
void *channel;
bool assert_bit40;
struct gsi_mhi_channel_scratch *mhi;
unsigned long *cached_gsi_evt_ring_hdl;
uint8_t evchid;
};
struct start_uc_channel {
enum ipa_hw_mhi_channel_states state;
u8 index;
u8 id;
};
struct start_mhi_channel {
struct start_uc_channel uC;
struct start_gsi_channel gsi;
};
struct ipa_mhi_connect_params_internal {
struct ipa_sys_connect_params *sys;
u8 channel_id;
struct start_mhi_channel start;
};
/**
* struct ipa_hdr_offset_entry - IPA header offset entry
* @link: entry's link in global header offset entries list
* @offset: the offset
* @bin: bin
*/
struct ipa_hdr_offset_entry {
struct list_head link;
u32 offset;
u32 bin;
};
extern const char *ipa_clients_strings[];
#define IPA_IPC_LOGGING(buf, fmt, args...) \
do { \
if (buf) \
ipc_log_string((buf), fmt, __func__, __LINE__, \
## args); \
} while (0)
void ipa_inc_client_enable_clks(struct ipa_active_client_logging_info *id);
void ipa_dec_client_disable_clks(struct ipa_active_client_logging_info *id);
int ipa_inc_client_enable_clks_no_block(
struct ipa_active_client_logging_info *id);
int ipa_suspend_resource_no_block(enum ipa_rm_resource_name resource);
int ipa_resume_resource(enum ipa_rm_resource_name name);
int ipa_suspend_resource_sync(enum ipa_rm_resource_name resource);
int ipa_set_required_perf_profile(enum ipa_voltage_level floor_voltage,
u32 bandwidth_mbps);
void *ipa_get_ipc_logbuf(void);
void *ipa_get_ipc_logbuf_low(void);
void ipa_assert(void);
/* MHI */
int ipa_mhi_init_engine(struct ipa_mhi_init_engine *params);
int ipa_connect_mhi_pipe(struct ipa_mhi_connect_params_internal *in,
u32 *clnt_hdl);
int ipa_disconnect_mhi_pipe(u32 clnt_hdl);
bool ipa_mhi_stop_gsi_channel(enum ipa_client_type client);
int ipa_qmi_enable_force_clear_datapath_send(
struct ipa_enable_force_clear_datapath_req_msg_v01 *req);
int ipa_qmi_disable_force_clear_datapath_send(
struct ipa_disable_force_clear_datapath_req_msg_v01 *req);
int ipa_generate_tag_process(void);
int ipa_disable_sps_pipe(enum ipa_client_type client);
int ipa_mhi_reset_channel_internal(enum ipa_client_type client);
int ipa_mhi_start_channel_internal(enum ipa_client_type client);
bool ipa_mhi_sps_channel_empty(enum ipa_client_type client);
int ipa_mhi_resume_channels_internal(enum ipa_client_type client,
bool LPTransitionRejected, bool brstmode_enabled,
union __packed gsi_channel_scratch ch_scratch, u8 index);
int ipa_mhi_handle_ipa_config_req(struct ipa_config_req_msg_v01 *config_req);
int ipa_mhi_query_ch_info(enum ipa_client_type client,
struct gsi_chan_info *ch_info);
int ipa_mhi_destroy_channel(enum ipa_client_type client);
int ipa_mhi_is_using_dma(bool *flag);
const char *ipa_mhi_get_state_str(int state);
/* MHI uC */
int ipa_uc_mhi_send_dl_ul_sync_info(union IpaHwMhiDlUlSyncCmdData_t *cmd);
int ipa_uc_mhi_init(void (*ready_cb)(void), void (*wakeup_request_cb)(void));
void ipa_uc_mhi_cleanup(void);
int ipa_uc_mhi_reset_channel(int channelHandle);
int ipa_uc_mhi_suspend_channel(int channelHandle);
int ipa_uc_mhi_stop_event_update_channel(int channelHandle);
int ipa_uc_mhi_print_stats(char *dbg_buff, int size);
/* uC */
int ipa_uc_state_check(void);
/* general */
void ipa_get_holb(int ep_idx, struct ipa_ep_cfg_holb *holb);
void ipa_set_tag_process_before_gating(bool val);
bool ipa_has_open_aggr_frame(enum ipa_client_type client);
int ipa_setup_uc_ntn_pipes(struct ipa_ntn_conn_in_params *in,
ipa_notify_cb notify, void *priv, u8 hdr_len,
struct ipa_ntn_conn_out_params *outp);
int ipa_tear_down_uc_offload_pipes(int ipa_ep_idx_ul, int ipa_ep_idx_dl);
u8 *ipa_write_64(u64 w, u8 *dest);
u8 *ipa_write_32(u32 w, u8 *dest);
u8 *ipa_write_16(u16 hw, u8 *dest);
u8 *ipa_write_8(u8 b, u8 *dest);
u8 *ipa_pad_to_64(u8 *dest);
u8 *ipa_pad_to_32(u8 *dest);
const char *ipa_get_version_string(enum ipa_hw_type ver);
#endif /* _IPA_COMMON_I_H_ */

File diff suppressed because it is too large


@@ -0,0 +1,251 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/slab.h>
#include "ipa_rm_dependency_graph.h"
#include "ipa_rm_i.h"
static int ipa_rm_dep_get_index(enum ipa_rm_resource_name resource_name)
{
int resource_index = IPA_RM_INDEX_INVALID;
if (IPA_RM_RESORCE_IS_PROD(resource_name))
resource_index = ipa_rm_prod_index(resource_name);
else if (IPA_RM_RESORCE_IS_CONS(resource_name))
resource_index = ipa_rm_cons_index(resource_name);
return resource_index;
}
/**
 * ipa_rm_dep_graph_create() - creates a dependency graph
 * @dep_graph: [out] created dependency graph
 *
 * Returns: 0 on success, negative on failure
*/
int ipa_rm_dep_graph_create(struct ipa_rm_dep_graph **dep_graph)
{
int result = 0;
*dep_graph = kzalloc(sizeof(**dep_graph), GFP_KERNEL);
if (!*dep_graph) {
IPA_RM_ERR("no mem\n");
result = -ENOMEM;
goto bail;
}
bail:
return result;
}
/**
 * ipa_rm_dep_graph_delete() - destroys the graph
* @graph: [in] dependency graph
*
* Frees all resources.
*/
void ipa_rm_dep_graph_delete(struct ipa_rm_dep_graph *graph)
{
int resource_index;
if (!graph) {
IPA_RM_ERR("invalid params\n");
return;
}
for (resource_index = 0;
resource_index < IPA_RM_RESOURCE_MAX;
resource_index++)
kfree(graph->resource_table[resource_index]);
memset(graph->resource_table, 0, sizeof(graph->resource_table));
}
/**
* ipa_rm_dep_graph_get_resource() - provides a resource by name
* @graph: [in] dependency graph
 * @resource_name: [in] name of the resource
* @resource: [out] resource in case of success
*
* Returns: 0 on success, negative on failure
*/
int ipa_rm_dep_graph_get_resource(
struct ipa_rm_dep_graph *graph,
enum ipa_rm_resource_name resource_name,
struct ipa_rm_resource **resource)
{
int result;
int resource_index;
if (!graph) {
result = -EINVAL;
goto bail;
}
resource_index = ipa_rm_dep_get_index(resource_name);
if (resource_index == IPA_RM_INDEX_INVALID) {
result = -EINVAL;
goto bail;
}
*resource = graph->resource_table[resource_index];
if (!*resource) {
result = -EINVAL;
goto bail;
}
result = 0;
bail:
return result;
}
/**
* ipa_rm_dep_graph_add() - adds resource to graph
* @graph: [in] dependency graph
* @resource: [in] resource to add
*
* Returns: 0 on success, negative on failure
*/
int ipa_rm_dep_graph_add(struct ipa_rm_dep_graph *graph,
struct ipa_rm_resource *resource)
{
int result = 0;
int resource_index;
if (!graph || !resource) {
result = -EINVAL;
goto bail;
}
resource_index = ipa_rm_dep_get_index(resource->name);
if (resource_index == IPA_RM_INDEX_INVALID) {
result = -EINVAL;
goto bail;
}
graph->resource_table[resource_index] = resource;
bail:
return result;
}
/**
* ipa_rm_dep_graph_remove() - removes resource from graph
* @graph: [in] dependency graph
 * @resource_name: [in] name of the resource to remove
*
* Returns: 0 on success, negative on failure
*/
int ipa_rm_dep_graph_remove(struct ipa_rm_dep_graph *graph,
enum ipa_rm_resource_name resource_name)
{
if (!graph)
return -EINVAL;
graph->resource_table[resource_name] = NULL;
return 0;
}
/**
 * ipa_rm_dep_graph_add_dependency() - adds a dependency between
 * two nodes in the graph
 * @graph: [in] dependency graph
 * @resource_name: [in] dependent resource
 * @depends_on_name: [in] resource it depends on
 * @userspace_dep: [in] was the operation requested by userspace?
*
* Returns: 0 on success, negative on failure
*/
int ipa_rm_dep_graph_add_dependency(struct ipa_rm_dep_graph *graph,
enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_name depends_on_name,
bool userspace_dep)
{
struct ipa_rm_resource *dependent = NULL;
struct ipa_rm_resource *dependency = NULL;
int result;
if (!graph ||
!IPA_RM_RESORCE_IS_PROD(resource_name) ||
!IPA_RM_RESORCE_IS_CONS(depends_on_name)) {
IPA_RM_ERR("invalid params\n");
result = -EINVAL;
goto bail;
}
if (ipa_rm_dep_graph_get_resource(graph,
resource_name,
&dependent)) {
IPA_RM_ERR("%s does not exist\n",
ipa_rm_resource_str(resource_name));
result = -EINVAL;
goto bail;
}
if (ipa_rm_dep_graph_get_resource(graph,
depends_on_name,
&dependency)) {
IPA_RM_ERR("%s does not exist\n",
ipa_rm_resource_str(depends_on_name));
result = -EINVAL;
goto bail;
}
result = ipa_rm_resource_add_dependency(dependent, dependency,
userspace_dep);
bail:
IPA_RM_DBG("EXIT with %d\n", result);
return result;
}
/**
 * ipa_rm_dep_graph_delete_dependency() - deletes a dependency between
 * two nodes in the graph
 * @graph: [in] dependency graph
 * @resource_name: [in] dependent resource
 * @depends_on_name: [in] resource it depends on
 * @userspace_dep: [in] was the operation requested by userspace?
*
* Returns: 0 on success, negative on failure
*
*/
int ipa_rm_dep_graph_delete_dependency(struct ipa_rm_dep_graph *graph,
enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_name depends_on_name,
bool userspace_dep)
{
struct ipa_rm_resource *dependent = NULL;
struct ipa_rm_resource *dependency = NULL;
int result;
if (!graph ||
!IPA_RM_RESORCE_IS_PROD(resource_name) ||
!IPA_RM_RESORCE_IS_CONS(depends_on_name)) {
IPA_RM_ERR("invalid params\n");
result = -EINVAL;
goto bail;
}
if (ipa_rm_dep_graph_get_resource(graph,
resource_name,
&dependent)) {
IPA_RM_ERR("%s does not exist\n",
ipa_rm_resource_str(resource_name));
result = -EINVAL;
goto bail;
}
if (ipa_rm_dep_graph_get_resource(graph,
depends_on_name,
&dependency)) {
IPA_RM_ERR("%s does not exist\n",
ipa_rm_resource_str(depends_on_name));
result = -EINVAL;
goto bail;
}
result = ipa_rm_resource_delete_dependency(dependent, dependency,
userspace_dep);
bail:
IPA_RM_DBG("EXIT with %d\n", result);
return result;
}


@@ -0,0 +1,49 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_RM_DEPENDENCY_GRAPH_H_
#define _IPA_RM_DEPENDENCY_GRAPH_H_
#include <linux/list.h>
#include <linux/ipa.h>
#include "ipa_rm_resource.h"
struct ipa_rm_dep_graph {
struct ipa_rm_resource *resource_table[IPA_RM_RESOURCE_MAX];
};
int ipa_rm_dep_graph_get_resource(
struct ipa_rm_dep_graph *graph,
enum ipa_rm_resource_name name,
struct ipa_rm_resource **resource);
int ipa_rm_dep_graph_create(struct ipa_rm_dep_graph **dep_graph);
void ipa_rm_dep_graph_delete(struct ipa_rm_dep_graph *graph);
int ipa_rm_dep_graph_add(struct ipa_rm_dep_graph *graph,
struct ipa_rm_resource *resource);
int ipa_rm_dep_graph_remove(struct ipa_rm_dep_graph *graph,
enum ipa_rm_resource_name resource_name);
int ipa_rm_dep_graph_add_dependency(struct ipa_rm_dep_graph *graph,
enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_name depends_on_name,
bool userspace_dep);
int ipa_rm_dep_graph_delete_dependency(struct ipa_rm_dep_graph *graph,
enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_name depends_on_name,
bool userspace_dep);
#endif /* _IPA_RM_DEPENDENCY_GRAPH_H_ */
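As the resource_table member shows, the dependency graph is a fixed-size table indexed by resource: add stores a pointer at the resource's index and remove clears the slot. A minimal stand-alone model of that layout, where RES_MAX and struct resource are stand-ins for IPA_RM_RESOURCE_MAX and struct ipa_rm_resource:

```c
#include <assert.h>
#include <stddef.h>

#define RES_MAX 4  /* stand-in for IPA_RM_RESOURCE_MAX */

struct resource { int name; };
struct dep_graph { struct resource *table[RES_MAX]; };

/* Store the resource at its own index; reject bad args, as the driver does. */
static int graph_add(struct dep_graph *g, struct resource *r)
{
	if (!g || !r || r->name < 0 || r->name >= RES_MAX)
		return -1;
	g->table[r->name] = r;
	return 0;
}

/* Look up a resource by name; NULL if absent or out of range. */
static struct resource *graph_get(struct dep_graph *g, int name)
{
	if (!g || name < 0 || name >= RES_MAX)
		return NULL;
	return g->table[name];
}
```

Indexing by name makes lookup O(1); the real driver additionally maps producer and consumer names to table indices via ipa_rm_prod_index()/ipa_rm_cons_index().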


@@ -0,0 +1,157 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_RM_I_H_
#define _IPA_RM_I_H_
#include <linux/workqueue.h>
#include <linux/ipa.h>
#include "ipa_rm_resource.h"
#include "ipa_common_i.h"
#define IPA_RM_DRV_NAME "ipa_rm"
#define IPA_RM_DBG_LOW(fmt, args...) \
do { \
pr_debug(IPA_RM_DRV_NAME " %s:%d " fmt, __func__, __LINE__, \
## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_RM_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_RM_DBG(fmt, args...) \
do { \
pr_debug(IPA_RM_DRV_NAME " %s:%d " fmt, __func__, __LINE__, \
## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPA_RM_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_RM_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_RM_ERR(fmt, args...) \
do { \
pr_err(IPA_RM_DRV_NAME " %s:%d " fmt, __func__, __LINE__, \
## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPA_RM_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_RM_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_RM_RESOURCE_CONS_MAX \
(IPA_RM_RESOURCE_MAX - IPA_RM_RESOURCE_PROD_MAX)
#define IPA_RM_RESORCE_IS_PROD(x) \
((x) >= IPA_RM_RESOURCE_PROD && (x) < IPA_RM_RESOURCE_PROD_MAX)
#define IPA_RM_RESORCE_IS_CONS(x) \
((x) >= IPA_RM_RESOURCE_PROD_MAX && (x) < IPA_RM_RESOURCE_MAX)
#define IPA_RM_INDEX_INVALID (-1)
#define IPA_RM_RELEASE_DELAY_IN_MSEC 1000
int ipa_rm_prod_index(enum ipa_rm_resource_name resource_name);
int ipa_rm_cons_index(enum ipa_rm_resource_name resource_name);
/**
* struct ipa_rm_delayed_release_work_type - IPA RM delayed resource release
* work type
 * @work: delayed work struct
 * @resource_name: name of the resource on which this work should be done
* @needed_bw: bandwidth required for resource in Mbps
* @dec_usage_count: decrease usage count on release ?
*/
struct ipa_rm_delayed_release_work_type {
struct delayed_work work;
enum ipa_rm_resource_name resource_name;
u32 needed_bw;
bool dec_usage_count;
};
/**
* enum ipa_rm_wq_cmd - workqueue commands
*/
enum ipa_rm_wq_cmd {
IPA_RM_WQ_NOTIFY_PROD,
IPA_RM_WQ_NOTIFY_CONS,
IPA_RM_WQ_RESOURCE_CB
};
/**
* struct ipa_rm_wq_work_type - IPA RM worqueue specific
* work type
* @work: work struct
* @wq_cmd: command that should be processed in workqueue context
* @resource_name: name of the resource on which this work
* should be done
* @dep_graph: data structure to search for resource if exists
* @event: event to notify
* @notify_registered_only: notify only clients registered by
* ipa_rm_register()
*/
struct ipa_rm_wq_work_type {
struct work_struct work;
enum ipa_rm_wq_cmd wq_cmd;
enum ipa_rm_resource_name resource_name;
enum ipa_rm_event event;
bool notify_registered_only;
};
/**
* struct ipa_rm_wq_suspend_resume_work_type - IPA RM worqueue resume or
* suspend work type
* @work: work struct
* @resource_name: name of the resource on which this work
* should be done
 * @prev_state: resource state before the suspend/resume request
 * @needed_bw: bandwidth needed for the resource, in Mbps
*/
struct ipa_rm_wq_suspend_resume_work_type {
struct work_struct work;
enum ipa_rm_resource_name resource_name;
enum ipa_rm_resource_state prev_state;
u32 needed_bw;
};
int ipa_rm_wq_send_cmd(enum ipa_rm_wq_cmd wq_cmd,
enum ipa_rm_resource_name resource_name,
enum ipa_rm_event event,
bool notify_registered_only);
int ipa_rm_wq_send_resume_cmd(enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_state prev_state,
u32 needed_bw);
int ipa_rm_wq_send_suspend_cmd(enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_state prev_state,
u32 needed_bw);
int ipa_rm_initialize(void);
int ipa_rm_stat(char *buf, int size);
const char *ipa_rm_resource_str(enum ipa_rm_resource_name resource_name);
void ipa_rm_perf_profile_change(enum ipa_rm_resource_name resource_name);
int ipa_rm_request_resource_with_timer(enum ipa_rm_resource_name resource_name);
void delayed_release_work_func(struct work_struct *work);
int ipa_rm_add_dependency_from_ioctl(enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_name depends_on_name);
int ipa_rm_delete_dependency_from_ioctl(enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_name depends_on_name);
void ipa_rm_exit(void);
#endif /* _IPA_RM_I_H_ */


@@ -0,0 +1,273 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/timer.h>
#include <linux/unistd.h>
#include <linux/workqueue.h>
#include <linux/ipa.h>
#include "ipa_rm_i.h"
/**
 * struct ipa_rm_it_private - IPA RM inactivity timer private data
 * @initied: indicates if the instance was initialized
 * @lock: spinlock for mutual exclusion
 * @resource_name: resource name
 * @work: delayed work object for running the delayed release function
 * @resource_requested: boolean flag, indicates if the resource was requested
 * @reschedule_work: boolean flag, indicates not to release but to
 * reschedule the release work
 * @work_in_progress: boolean flag, indicates if release work was scheduled
 * @jiffies: number of jiffies for timeout
*/
struct ipa_rm_it_private {
bool initied;
enum ipa_rm_resource_name resource_name;
spinlock_t lock;
struct delayed_work work;
bool resource_requested;
bool reschedule_work;
bool work_in_progress;
unsigned long jiffies;
};
static struct ipa_rm_it_private ipa_rm_it_handles[IPA_RM_RESOURCE_MAX];
/**
 * ipa_rm_inactivity_timer_func() - called when the timer expires, in
 * the context of the shared workqueue. Checks internally whether the
 * reschedule_work flag is set: if it is not set, this function calls
 * ipa_rm_release_resource(); if it is set, the work is rescheduled.
 * The flag is cleared when ipa_rm_inactivity_timer_release_resource()
 * is called.
*
* @work: work object provided by the work queue
*
* Return codes:
* None
*/
static void ipa_rm_inactivity_timer_func(struct work_struct *work)
{
struct ipa_rm_it_private *me = container_of(to_delayed_work(work),
struct ipa_rm_it_private,
work);
unsigned long flags;
IPA_RM_DBG_LOW("%s: timer expired for resource %d!\n", __func__,
me->resource_name);
spin_lock_irqsave(
&ipa_rm_it_handles[me->resource_name].lock, flags);
if (ipa_rm_it_handles[me->resource_name].reschedule_work) {
IPA_RM_DBG_LOW("%s: setting delayed work\n", __func__);
ipa_rm_it_handles[me->resource_name].reschedule_work = false;
queue_delayed_work(system_unbound_wq,
&ipa_rm_it_handles[me->resource_name].work,
ipa_rm_it_handles[me->resource_name].jiffies);
} else if (ipa_rm_it_handles[me->resource_name].resource_requested) {
IPA_RM_DBG_LOW("%s: not calling release\n", __func__);
ipa_rm_it_handles[me->resource_name].work_in_progress = false;
} else {
IPA_RM_DBG_LOW("%s: calling release_resource on resource %d!\n",
__func__, me->resource_name);
ipa_rm_release_resource(me->resource_name);
ipa_rm_it_handles[me->resource_name].work_in_progress = false;
}
spin_unlock_irqrestore(
&ipa_rm_it_handles[me->resource_name].lock, flags);
}
/**
 * ipa_rm_inactivity_timer_init() - Init function for the IPA RM
 * inactivity timer. This function shall be called prior to calling
 * any other API of the IPA RM inactivity timer.
 *
 * @resource_name: Resource name. @see ipa_rm.h
 * @msecs: time in milliseconds that the IPA RM inactivity timer
 * shall wait before calling ipa_rm_release_resource().
*
* Return codes:
* 0: success
* -EINVAL: invalid parameters
*/
int ipa_rm_inactivity_timer_init(enum ipa_rm_resource_name resource_name,
unsigned long msecs)
{
IPA_RM_DBG_LOW("%s: resource %d\n", __func__, resource_name);
if (resource_name < 0 ||
resource_name >= IPA_RM_RESOURCE_MAX) {
IPA_RM_ERR("%s: Invalid parameter\n", __func__);
return -EINVAL;
}
if (ipa_rm_it_handles[resource_name].initied) {
IPA_RM_ERR("%s: resource %d already inited\n",
__func__, resource_name);
return -EINVAL;
}
spin_lock_init(&ipa_rm_it_handles[resource_name].lock);
ipa_rm_it_handles[resource_name].resource_name = resource_name;
ipa_rm_it_handles[resource_name].jiffies = msecs_to_jiffies(msecs);
ipa_rm_it_handles[resource_name].resource_requested = false;
ipa_rm_it_handles[resource_name].reschedule_work = false;
ipa_rm_it_handles[resource_name].work_in_progress = false;
INIT_DELAYED_WORK(&ipa_rm_it_handles[resource_name].work,
ipa_rm_inactivity_timer_func);
ipa_rm_it_handles[resource_name].initied = 1;
return 0;
}
EXPORT_SYMBOL(ipa_rm_inactivity_timer_init);
/**
* ipa_rm_inactivity_timer_destroy() - De-Init function for IPA
* RM inactivity timer.
*
* @resource_name: Resource name. @see ipa_rm.h
*
* Return codes:
* 0: success
* -EINVAL: invalid parameters
*/
int ipa_rm_inactivity_timer_destroy(enum ipa_rm_resource_name resource_name)
{
IPA_RM_DBG_LOW("%s: resource %d\n", __func__, resource_name);
if (resource_name < 0 ||
resource_name >= IPA_RM_RESOURCE_MAX) {
IPA_RM_ERR("%s: Invalid parameter\n", __func__);
return -EINVAL;
}
if (!ipa_rm_it_handles[resource_name].initied) {
IPA_RM_ERR("%s: resource %d not initialized\n",
__func__, resource_name);
return -EINVAL;
}
cancel_delayed_work_sync(&ipa_rm_it_handles[resource_name].work);
memset(&ipa_rm_it_handles[resource_name], 0,
sizeof(struct ipa_rm_it_private));
return 0;
}
EXPORT_SYMBOL(ipa_rm_inactivity_timer_destroy);
/**
 * ipa_rm_inactivity_timer_request_resource() - Same as
 * ipa_rm_request_resource(), with the difference that calling this
 * function will also cancel the inactivity timer, if
 * ipa_rm_inactivity_timer_release_resource() was called earlier.
*
* @resource_name: Resource name. @see ipa_rm.h
*
* Return codes:
* 0: success
* -EINVAL: invalid parameters
*/
int ipa_rm_inactivity_timer_request_resource(
enum ipa_rm_resource_name resource_name)
{
int ret;
unsigned long flags;
IPA_RM_DBG_LOW("%s: resource %d\n", __func__, resource_name);
if (resource_name < 0 ||
resource_name >= IPA_RM_RESOURCE_MAX) {
IPA_RM_ERR("%s: Invalid parameter\n", __func__);
return -EINVAL;
}
if (!ipa_rm_it_handles[resource_name].initied) {
IPA_RM_ERR("%s: Not initialized\n", __func__);
return -EINVAL;
}
spin_lock_irqsave(&ipa_rm_it_handles[resource_name].lock, flags);
ipa_rm_it_handles[resource_name].resource_requested = true;
spin_unlock_irqrestore(&ipa_rm_it_handles[resource_name].lock, flags);
ret = ipa_rm_request_resource(resource_name);
IPA_RM_DBG_LOW("%s: resource %d: returning %d\n", __func__,
resource_name, ret);
return ret;
}
EXPORT_SYMBOL(ipa_rm_inactivity_timer_request_resource);
/**
 * ipa_rm_inactivity_timer_release_resource() - Sets the
 * inactivity timer to the timeout set by
 * ipa_rm_inactivity_timer_init(). When the timeout expires, the IPA
 * RM inactivity timer will call ipa_rm_release_resource().
 * If a call to ipa_rm_inactivity_timer_request_resource() was
 * made BEFORE the timeout has expired, the timer will be
 * cancelled.
*
* @resource_name: Resource name. @see ipa_rm.h
*
* Return codes:
* 0: success
* -EINVAL: invalid parameters
*/
int ipa_rm_inactivity_timer_release_resource(
enum ipa_rm_resource_name resource_name)
{
unsigned long flags;
IPA_RM_DBG_LOW("%s: resource %d\n", __func__, resource_name);
if (resource_name < 0 ||
resource_name >= IPA_RM_RESOURCE_MAX) {
IPA_RM_ERR("%s: Invalid parameter\n", __func__);
return -EINVAL;
}
if (!ipa_rm_it_handles[resource_name].initied) {
IPA_RM_ERR("%s: Not initialized\n", __func__);
return -EINVAL;
}
spin_lock_irqsave(&ipa_rm_it_handles[resource_name].lock, flags);
ipa_rm_it_handles[resource_name].resource_requested = false;
if (ipa_rm_it_handles[resource_name].work_in_progress) {
IPA_RM_DBG_LOW("%s: Timer already set, no sched again %d\n",
__func__, resource_name);
ipa_rm_it_handles[resource_name].reschedule_work = true;
spin_unlock_irqrestore(
&ipa_rm_it_handles[resource_name].lock, flags);
return 0;
}
ipa_rm_it_handles[resource_name].work_in_progress = true;
ipa_rm_it_handles[resource_name].reschedule_work = false;
IPA_RM_DBG_LOW("%s: setting delayed work\n", __func__);
queue_delayed_work(system_unbound_wq,
&ipa_rm_it_handles[resource_name].work,
ipa_rm_it_handles[resource_name].jiffies);
spin_unlock_irqrestore(&ipa_rm_it_handles[resource_name].lock, flags);
return 0;
}
EXPORT_SYMBOL(ipa_rm_inactivity_timer_release_resource);


@@ -0,0 +1,280 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/slab.h>
#include "ipa_rm_i.h"
/**
 * ipa_rm_peers_list_get_resource_index() - maps a resource name to
 * the index of this resource in the corresponding peers list
 * @resource_name: [in] resource name
 *
 * Returns: the resource index mapping, or IPA_RM_INDEX_INVALID
 * in case the provided resource name is not contained in enum
 * ipa_rm_resource_name.
*
*/
static int ipa_rm_peers_list_get_resource_index(
enum ipa_rm_resource_name resource_name)
{
int resource_index = IPA_RM_INDEX_INVALID;
if (IPA_RM_RESORCE_IS_PROD(resource_name))
resource_index = ipa_rm_prod_index(resource_name);
else if (IPA_RM_RESORCE_IS_CONS(resource_name)) {
resource_index = ipa_rm_cons_index(resource_name);
if (resource_index != IPA_RM_INDEX_INVALID)
resource_index =
resource_index - IPA_RM_RESOURCE_PROD_MAX;
}
return resource_index;
}
static bool ipa_rm_peers_list_check_index(int index,
struct ipa_rm_peers_list *peers_list)
{
return !(index >= peers_list->max_peers || index < 0);
}
/**
* ipa_rm_peers_list_create() - creates the peers list
*
* @max_peers: maximum number of peers in new list
* @peers_list: [out] newly created peers list
*
* Returns: 0 in case of SUCCESS, negative otherwise
*/
int ipa_rm_peers_list_create(int max_peers,
struct ipa_rm_peers_list **peers_list)
{
int result;
*peers_list = kzalloc(sizeof(**peers_list), GFP_ATOMIC);
if (!*peers_list) {
IPA_RM_ERR("no mem\n");
result = -ENOMEM;
goto bail;
}
(*peers_list)->max_peers = max_peers;
(*peers_list)->peers = kzalloc((*peers_list)->max_peers *
sizeof(*((*peers_list)->peers)), GFP_ATOMIC);
if (!((*peers_list)->peers)) {
IPA_RM_ERR("no mem\n");
result = -ENOMEM;
goto list_alloc_fail;
}
return 0;
list_alloc_fail:
kfree(*peers_list);
bail:
return result;
}
/**
* ipa_rm_peers_list_delete() - deletes the peers list
*
* @peers_list: peers list
*
*/
void ipa_rm_peers_list_delete(struct ipa_rm_peers_list *peers_list)
{
if (peers_list) {
kfree(peers_list->peers);
kfree(peers_list);
}
}
/**
* ipa_rm_peers_list_remove_peer() - removes peer from the list
*
* @peers_list: peers list
* @resource_name: name of the resource to remove
*
*/
void ipa_rm_peers_list_remove_peer(
struct ipa_rm_peers_list *peers_list,
enum ipa_rm_resource_name resource_name)
{
if (!peers_list)
return;
peers_list->peers[ipa_rm_peers_list_get_resource_index(
resource_name)].resource = NULL;
peers_list->peers[ipa_rm_peers_list_get_resource_index(
resource_name)].userspace_dep = false;
peers_list->peers_count--;
}
/**
* ipa_rm_peers_list_add_peer() - adds peer to the list
*
* @peers_list: peers list
* @resource: resource to add
*
*/
void ipa_rm_peers_list_add_peer(
struct ipa_rm_peers_list *peers_list,
struct ipa_rm_resource *resource,
bool userspace_dep)
{
if (!peers_list || !resource)
return;
peers_list->peers[ipa_rm_peers_list_get_resource_index(
resource->name)].resource = resource;
peers_list->peers[ipa_rm_peers_list_get_resource_index(
resource->name)].userspace_dep = userspace_dep;
peers_list->peers_count++;
}
/**
* ipa_rm_peers_list_is_empty() - checks
* if resource peers list is empty
*
* @peers_list: peers list
*
* Returns: true if the list is empty, false otherwise
*/
bool ipa_rm_peers_list_is_empty(struct ipa_rm_peers_list *peers_list)
{
bool result = true;
if (!peers_list)
goto bail;
if (peers_list->peers_count > 0)
result = false;
bail:
return result;
}
/**
* ipa_rm_peers_list_has_last_peer() - checks
* if resource peers list has exactly one peer
*
* @peers_list: peers list
*
* Returns: true if the list has exactly one peer, false otherwise
*/
bool ipa_rm_peers_list_has_last_peer(
struct ipa_rm_peers_list *peers_list)
{
bool result = false;
if (!peers_list)
goto bail;
if (peers_list->peers_count == 1)
result = true;
bail:
return result;
}
/**
* ipa_rm_peers_list_check_dependency() - check dependency
* between 2 peer lists
* @resource_peers: first peers list
* @resource_name: first peers list resource name
* @depends_on_peers: second peers list
* @depends_on_name: second peers list resource name
* @userspace_dep: [out] dependency was created by userspace
*
* Returns: true if there is dependency, false otherwise
*
*/
bool ipa_rm_peers_list_check_dependency(
struct ipa_rm_peers_list *resource_peers,
enum ipa_rm_resource_name resource_name,
struct ipa_rm_peers_list *depends_on_peers,
enum ipa_rm_resource_name depends_on_name,
bool *userspace_dep)
{
bool result = false;
int resource_index;
if (!resource_peers || !depends_on_peers || !userspace_dep)
return result;
resource_index = ipa_rm_peers_list_get_resource_index(depends_on_name);
if (resource_peers->peers[resource_index].resource != NULL) {
result = true;
*userspace_dep = resource_peers->peers[resource_index].
userspace_dep;
}
resource_index = ipa_rm_peers_list_get_resource_index(resource_name);
if (depends_on_peers->peers[resource_index].resource != NULL) {
result = true;
*userspace_dep = depends_on_peers->peers[resource_index].
userspace_dep;
}
return result;
}
/**
* ipa_rm_peers_list_get_resource() - get resource by
* resource index
* @resource_index: resource index
* @resource_peers: peers list
*
* Returns: the resource if found, NULL otherwise
*/
struct ipa_rm_resource *ipa_rm_peers_list_get_resource(int resource_index,
struct ipa_rm_peers_list *resource_peers)
{
struct ipa_rm_resource *result = NULL;
if (!ipa_rm_peers_list_check_index(resource_index, resource_peers))
goto bail;
result = resource_peers->peers[resource_index].resource;
bail:
return result;
}
/**
* ipa_rm_peers_list_get_userspace_dep() - returns whether resource dependency
* was added by userspace
* @resource_index: resource index
* @resource_peers: peers list
*
 * Returns: true if the dependency was added by userspace, false if by kernel
*/
bool ipa_rm_peers_list_get_userspace_dep(int resource_index,
struct ipa_rm_peers_list *resource_peers)
{
bool result = false;
if (!ipa_rm_peers_list_check_index(resource_index, resource_peers))
goto bail;
result = resource_peers->peers[resource_index].userspace_dep;
bail:
return result;
}
/**
 * ipa_rm_peers_list_get_size() - get the peers list size
*
* @peers_list: peers list
*
* Returns: the size of the peers list
*/
int ipa_rm_peers_list_get_size(struct ipa_rm_peers_list *peers_list)
{
return peers_list->max_peers;
}


@@ -0,0 +1,62 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_RM_PEERS_LIST_H_
#define _IPA_RM_PEERS_LIST_H_
#include "ipa_rm_resource.h"
struct ipa_rm_resource_peer {
struct ipa_rm_resource *resource;
bool userspace_dep;
};
/**
* struct ipa_rm_peers_list - IPA RM resource peers list
* @peers: the list of references to resources dependent on this resource
* in case of producer or list of dependencies in case of consumer
* @max_peers: maximum number of peers for this resource
* @peers_count: actual number of peers for this resource
*/
struct ipa_rm_peers_list {
struct ipa_rm_resource_peer *peers;
int max_peers;
int peers_count;
};
int ipa_rm_peers_list_create(int max_peers,
struct ipa_rm_peers_list **peers_list);
void ipa_rm_peers_list_delete(struct ipa_rm_peers_list *peers_list);
void ipa_rm_peers_list_remove_peer(
struct ipa_rm_peers_list *peers_list,
enum ipa_rm_resource_name resource_name);
void ipa_rm_peers_list_add_peer(
struct ipa_rm_peers_list *peers_list,
struct ipa_rm_resource *resource,
bool userspace_dep);
bool ipa_rm_peers_list_check_dependency(
struct ipa_rm_peers_list *resource_peers,
enum ipa_rm_resource_name resource_name,
struct ipa_rm_peers_list *depends_on_peers,
enum ipa_rm_resource_name depends_on_name,
bool *userspace_dep);
struct ipa_rm_resource *ipa_rm_peers_list_get_resource(int resource_index,
struct ipa_rm_peers_list *peers_list);
bool ipa_rm_peers_list_get_userspace_dep(int resource_index,
struct ipa_rm_peers_list *resource_peers);
int ipa_rm_peers_list_get_size(struct ipa_rm_peers_list *peers_list);
bool ipa_rm_peers_list_is_empty(struct ipa_rm_peers_list *peers_list);
bool ipa_rm_peers_list_has_last_peer(
struct ipa_rm_peers_list *peers_list);
#endif /* _IPA_RM_PEERS_LIST_H_ */

File diff suppressed because it is too large


@@ -0,0 +1,165 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_RM_RESOURCE_H_
#define _IPA_RM_RESOURCE_H_
#include <linux/list.h>
#include <linux/ipa.h>
#include "ipa_rm_peers_list.h"
/**
* enum ipa_rm_resource_state - resource state
*/
enum ipa_rm_resource_state {
IPA_RM_RELEASED,
IPA_RM_REQUEST_IN_PROGRESS,
IPA_RM_GRANTED,
IPA_RM_RELEASE_IN_PROGRESS
};
/**
* enum ipa_rm_resource_type - IPA resource manager resource type
*/
enum ipa_rm_resource_type {
IPA_RM_PRODUCER,
IPA_RM_CONSUMER
};
/**
* struct ipa_rm_notification_info - notification information
* of IPA RM client
* @reg_params: registration parameters
* @explicit: registered explicitly by ipa_rm_register()
* @link: link to the list of all registered clients information
*/
struct ipa_rm_notification_info {
struct ipa_rm_register_params reg_params;
bool explicit;
struct list_head link;
};
/**
* struct ipa_rm_resource - IPA RM resource
* @name: name identifying resource
* @type: type of resource (PRODUCER or CONSUMER)
* @floor_voltage: minimum voltage level for operation
* @max_bw: maximum bandwidth required for resource in Mbps
* @state: state of the resource
* @peers_list: list of the peers of the resource
*/
struct ipa_rm_resource {
enum ipa_rm_resource_name name;
enum ipa_rm_resource_type type;
enum ipa_voltage_level floor_voltage;
u32 max_bw;
u32 needed_bw;
enum ipa_rm_resource_state state;
struct ipa_rm_peers_list *peers_list;
};
/**
* struct ipa_rm_resource_cons - IPA RM consumer
* @resource: resource
* @usage_count: number of producers in GRANTED / REQUESTED state
* using this consumer
* @request_consumer_in_progress: when set, the consumer is during its request
* phase
* @request_resource: function which should be called to request resource
* from resource manager
* @release_resource: function which should be called to release resource
* from resource manager
* Add new fields after @resource only.
*/
struct ipa_rm_resource_cons {
struct ipa_rm_resource resource;
int usage_count;
struct completion request_consumer_in_progress;
int (*request_resource)(void);
int (*release_resource)(void);
};
/**
* struct ipa_rm_resource_prod - IPA RM producer
* @resource: resource
 * @event_listeners: list of clients registered with this producer
 * for notifications on resource state changes
 * Add new fields after @resource only.
*/
struct ipa_rm_resource_prod {
struct ipa_rm_resource resource;
struct list_head event_listeners;
int pending_request;
int pending_release;
};
int ipa_rm_resource_create(
struct ipa_rm_create_params *create_params,
struct ipa_rm_resource **resource);
int ipa_rm_resource_delete(struct ipa_rm_resource *resource);
int ipa_rm_resource_producer_register(struct ipa_rm_resource_prod *producer,
struct ipa_rm_register_params *reg_params,
bool explicit);
int ipa_rm_resource_producer_deregister(struct ipa_rm_resource_prod *producer,
struct ipa_rm_register_params *reg_params);
int ipa_rm_resource_add_dependency(struct ipa_rm_resource *resource,
struct ipa_rm_resource *depends_on,
bool userspace_dep);
int ipa_rm_resource_delete_dependency(struct ipa_rm_resource *resource,
struct ipa_rm_resource *depends_on,
bool userspace_dep);
int ipa_rm_resource_producer_request(struct ipa_rm_resource_prod *producer);
int ipa_rm_resource_producer_release(struct ipa_rm_resource_prod *producer);
int ipa_rm_resource_consumer_request(struct ipa_rm_resource_cons *consumer,
u32 needed_bw,
bool inc_usage_count,
bool wake_client);
int ipa_rm_resource_consumer_release(struct ipa_rm_resource_cons *consumer,
u32 needed_bw,
bool dec_usage_count);
int ipa_rm_resource_set_perf_profile(struct ipa_rm_resource *resource,
struct ipa_rm_perf_profile *profile);
void ipa_rm_resource_consumer_handle_cb(struct ipa_rm_resource_cons *consumer,
enum ipa_rm_event event);
void ipa_rm_resource_producer_notify_clients(
struct ipa_rm_resource_prod *producer,
enum ipa_rm_event event,
bool notify_registered_only);
int ipa_rm_resource_producer_print_stat(
struct ipa_rm_resource *resource,
char *buf,
int size);
int ipa_rm_resource_consumer_request_work(struct ipa_rm_resource_cons *consumer,
enum ipa_rm_resource_state prev_state,
u32 needed_bw,
bool notify_completion);
int ipa_rm_resource_consumer_release_work(
struct ipa_rm_resource_cons *consumer,
enum ipa_rm_resource_state prev_state,
bool notify_completion);
#endif /* _IPA_RM_RESOURCE_H_ */


@@ -0,0 +1,24 @@
/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/ipa_mhi.h>
#include <linux/ipa_qmi_service_v01.h>
#ifndef _IPA_UC_OFFLOAD_COMMON_I_H_
#define _IPA_UC_OFFLOAD_COMMON_I_H_
int ipa_setup_uc_ntn_pipes(struct ipa_ntn_conn_in_params *in,
ipa_notify_cb notify, void *priv, u8 hdr_len,
struct ipa_ntn_conn_out_params *outp);
int ipa_tear_down_uc_offload_pipes(int ipa_ep_idx_ul, int ipa_ep_idx_dl);
#endif /* _IPA_UC_OFFLOAD_COMMON_I_H_ */


@@ -0,0 +1,6 @@
obj-$(CONFIG_IPA) += ipat.o
ipat-y := ipa.o ipa_debugfs.o ipa_hdr.o ipa_flt.o ipa_rt.o ipa_dp.o ipa_client.o \
ipa_utils.o ipa_nat.o ipa_intf.o teth_bridge.o ipa_interrupts.o \
ipa_uc.o ipa_uc_wdi.o ipa_dma.o ipa_uc_mhi.o ipa_mhi.o ipa_uc_ntn.o
obj-$(CONFIG_RMNET_IPA) += rmnet_ipa.o ipa_qmi_service_v01.o ipa_qmi_service.o rmnet_ipa_fd_ioctl.o

File diff suppressed because it is too large


@@ -0,0 +1,897 @@
/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <asm/barrier.h>
#include <linux/delay.h>
#include <linux/device.h>
#include "ipa_i.h"
/*
 * These values were determined empirically and show good E2E
 * bi-directional throughput
*/
#define IPA_HOLB_TMR_EN 0x1
#define IPA_HOLB_TMR_DIS 0x0
#define IPA_HOLB_TMR_DEFAULT_VAL 0x1ff
#define IPA_PKT_FLUSH_TO_US 100
int ipa_enable_data_path(u32 clnt_hdl)
{
struct ipa_ep_context *ep = &ipa_ctx->ep[clnt_hdl];
struct ipa_ep_cfg_holb holb_cfg;
struct ipa_ep_cfg_ctrl ep_cfg_ctrl;
int res = 0;
IPADBG("Enabling data path\n");
/* From IPA 2.0, disable HOLB */
if ((ipa_ctx->ipa_hw_type >= IPA_HW_v2_0) &&
IPA_CLIENT_IS_CONS(ep->client)) {
memset(&holb_cfg, 0, sizeof(holb_cfg));
holb_cfg.en = IPA_HOLB_TMR_DIS;
holb_cfg.tmr_val = 0;
res = ipa2_cfg_ep_holb(clnt_hdl, &holb_cfg);
}
/* Enable the pipe */
if (IPA_CLIENT_IS_CONS(ep->client) &&
(ep->keep_ipa_awake ||
ipa_ctx->resume_on_connect[ep->client] ||
!ipa_should_pipe_be_suspended(ep->client))) {
memset(&ep_cfg_ctrl, 0, sizeof(ep_cfg_ctrl));
ep_cfg_ctrl.ipa_ep_suspend = false;
ipa2_cfg_ep_ctrl(clnt_hdl, &ep_cfg_ctrl);
}
return res;
}
int ipa_disable_data_path(u32 clnt_hdl)
{
struct ipa_ep_context *ep = &ipa_ctx->ep[clnt_hdl];
struct ipa_ep_cfg_holb holb_cfg;
struct ipa_ep_cfg_ctrl ep_cfg_ctrl;
u32 aggr_init;
int res = 0;
IPADBG("Disabling data path\n");
/* On IPA 2.0, enable HOLB in order to prevent IPA from stalling */
if ((ipa_ctx->ipa_hw_type >= IPA_HW_v2_0) &&
IPA_CLIENT_IS_CONS(ep->client)) {
memset(&holb_cfg, 0, sizeof(holb_cfg));
holb_cfg.en = IPA_HOLB_TMR_EN;
holb_cfg.tmr_val = 0;
res = ipa2_cfg_ep_holb(clnt_hdl, &holb_cfg);
}
/* Suspend the pipe */
if (IPA_CLIENT_IS_CONS(ep->client)) {
memset(&ep_cfg_ctrl, 0, sizeof(struct ipa_ep_cfg_ctrl));
ep_cfg_ctrl.ipa_ep_suspend = true;
ipa2_cfg_ep_ctrl(clnt_hdl, &ep_cfg_ctrl);
}
udelay(IPA_PKT_FLUSH_TO_US);
aggr_init = ipa_read_reg(ipa_ctx->mmio,
IPA_ENDP_INIT_AGGR_N_OFST_v2_0(clnt_hdl));
if (((aggr_init & IPA_ENDP_INIT_AGGR_N_AGGR_EN_BMSK) >>
IPA_ENDP_INIT_AGGR_N_AGGR_EN_SHFT) == IPA_ENABLE_AGGR) {
res = ipa_tag_aggr_force_close(clnt_hdl);
if (res) {
IPAERR("tag process timeout, client:%d err:%d\n",
clnt_hdl, res);
BUG();
}
}
return res;
}
static int ipa2_smmu_map_peer_bam(unsigned long dev)
{
phys_addr_t base;
u32 size;
struct iommu_domain *smmu_domain;
struct ipa_smmu_cb_ctx *cb = ipa2_get_smmu_ctx();
if (!ipa_ctx->smmu_s1_bypass) {
if (ipa_ctx->peer_bam_map_cnt == 0) {
if (sps_get_bam_addr(dev, &base, &size)) {
IPAERR("Fail to get addr\n");
return -EINVAL;
}
smmu_domain = ipa2_get_smmu_domain();
if (smmu_domain != NULL) {
if (ipa_iommu_map(smmu_domain,
cb->va_end,
rounddown(base, PAGE_SIZE),
roundup(size + base -
rounddown(base, PAGE_SIZE), PAGE_SIZE),
IOMMU_READ | IOMMU_WRITE |
IOMMU_DEVICE)) {
IPAERR("Fail to ipa_iommu_map\n");
return -EINVAL;
}
}
ipa_ctx->peer_bam_iova = cb->va_end;
ipa_ctx->peer_bam_pa = base;
ipa_ctx->peer_bam_map_size = size;
ipa_ctx->peer_bam_dev = dev;
IPADBG("Peer bam %lu mapped\n", dev);
} else {
WARN_ON(dev != ipa_ctx->peer_bam_dev);
}
ipa_ctx->peer_bam_map_cnt++;
}
return 0;
}
static int ipa_connect_configure_sps(const struct ipa_connect_params *in,
struct ipa_ep_context *ep, int ipa_ep_idx)
{
int result = -EFAULT;
/* Default Config */
ep->ep_hdl = sps_alloc_endpoint();
if (ipa2_smmu_map_peer_bam(in->client_bam_hdl)) {
IPAERR("fail to iommu map peer BAM.\n");
return -EFAULT;
}
if (ep->ep_hdl == NULL) {
IPAERR("SPS EP alloc failed EP.\n");
return -EFAULT;
}
result = sps_get_config(ep->ep_hdl,
&ep->connect);
if (result) {
IPAERR("fail to get config.\n");
return -EFAULT;
}
/* Specific Config */
if (IPA_CLIENT_IS_CONS(in->client)) {
ep->connect.mode = SPS_MODE_SRC;
ep->connect.destination =
in->client_bam_hdl;
ep->connect.dest_iova = ipa_ctx->peer_bam_iova;
ep->connect.source = ipa_ctx->bam_handle;
ep->connect.dest_pipe_index =
in->client_ep_idx;
ep->connect.src_pipe_index = ipa_ep_idx;
} else {
ep->connect.mode = SPS_MODE_DEST;
ep->connect.source = in->client_bam_hdl;
ep->connect.source_iova = ipa_ctx->peer_bam_iova;
ep->connect.destination = ipa_ctx->bam_handle;
ep->connect.src_pipe_index = in->client_ep_idx;
ep->connect.dest_pipe_index = ipa_ep_idx;
}
return 0;
}
static int ipa_connect_allocate_fifo(const struct ipa_connect_params *in,
struct sps_mem_buffer *mem_buff_ptr,
bool *fifo_in_pipe_mem_ptr,
u32 *fifo_pipe_mem_ofst_ptr,
u32 fifo_size, int ipa_ep_idx)
{
dma_addr_t dma_addr;
u32 ofst;
int result = -EFAULT;
struct iommu_domain *smmu_domain;
mem_buff_ptr->size = fifo_size;
if (in->pipe_mem_preferred) {
if (ipa_pipe_mem_alloc(&ofst, fifo_size)) {
IPAERR("FIFO pipe mem alloc fail ep %u\n",
ipa_ep_idx);
mem_buff_ptr->base =
dma_alloc_coherent(ipa_ctx->pdev,
mem_buff_ptr->size,
&dma_addr, GFP_KERNEL);
} else {
memset(mem_buff_ptr, 0, sizeof(struct sps_mem_buffer));
result = sps_setup_bam2bam_fifo(mem_buff_ptr, ofst,
fifo_size, 1);
WARN_ON(result);
*fifo_in_pipe_mem_ptr = 1;
dma_addr = mem_buff_ptr->phys_base;
*fifo_pipe_mem_ofst_ptr = ofst;
}
} else {
mem_buff_ptr->base =
dma_alloc_coherent(ipa_ctx->pdev, mem_buff_ptr->size,
&dma_addr, GFP_KERNEL);
}
if (ipa_ctx->smmu_s1_bypass) {
mem_buff_ptr->phys_base = dma_addr;
} else {
mem_buff_ptr->iova = dma_addr;
smmu_domain = ipa2_get_smmu_domain();
if (smmu_domain != NULL) {
mem_buff_ptr->phys_base =
iommu_iova_to_phys(smmu_domain, dma_addr);
}
}
if (mem_buff_ptr->base == NULL) {
IPAERR("fail to get DMA memory.\n");
return -EFAULT;
}
return 0;
}
/**
* ipa2_connect() - low-level IPA client connect
* @in: [in] input parameters from client
* @sps: [out] sps output from IPA needed by client for sps_connect
* @clnt_hdl: [out] opaque client handle assigned by IPA to client
*
* Should be called by the driver of the peripheral that wants to connect to
 * IPA in BAM-BAM mode. These peripherals are USB and HSIC. This API
 * expects the caller to take responsibility to add any needed headers,
 * routing and filtering tables and rules as needed.
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_connect(const struct ipa_connect_params *in,
struct ipa_sps_params *sps, u32 *clnt_hdl)
{
int ipa_ep_idx;
int result = -EFAULT;
struct ipa_ep_context *ep;
struct ipa_ep_cfg_status ep_status;
unsigned long base;
struct iommu_domain *smmu_domain;
if (unlikely(!ipa_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
IPADBG("connecting client\n");
if (in == NULL || sps == NULL || clnt_hdl == NULL ||
in->client >= IPA_CLIENT_MAX ||
in->desc_fifo_sz == 0 || in->data_fifo_sz == 0) {
IPAERR("bad parm.\n");
return -EINVAL;
}
ipa_ep_idx = ipa2_get_ep_mapping(in->client);
if (ipa_ep_idx == -1) {
IPAERR("fail to alloc EP.\n");
goto fail;
}
ep = &ipa_ctx->ep[ipa_ep_idx];
if (ep->valid) {
IPAERR("EP already allocated.\n");
goto fail;
}
memset(&ipa_ctx->ep[ipa_ep_idx], 0, sizeof(struct ipa_ep_context));
IPA_ACTIVE_CLIENTS_INC_EP(in->client);
ep->skip_ep_cfg = in->skip_ep_cfg;
ep->valid = 1;
ep->client = in->client;
ep->client_notify = in->notify;
ep->priv = in->priv;
ep->keep_ipa_awake = in->keep_ipa_awake;
/* Notify uc to start monitoring holb on USB BAM Producer pipe. */
if (IPA_CLIENT_IS_USB_CONS(in->client)) {
ipa_uc_monitor_holb(in->client, true);
IPADBG("Enabling holb monitor for client:%d", in->client);
}
result = ipa_enable_data_path(ipa_ep_idx);
if (result) {
IPAERR("enable data path failed res=%d clnt=%d.\n", result,
ipa_ep_idx);
goto ipa_cfg_ep_fail;
}
if (!ep->skip_ep_cfg) {
if (ipa2_cfg_ep(ipa_ep_idx, &in->ipa_ep_cfg)) {
IPAERR("fail to configure EP.\n");
goto ipa_cfg_ep_fail;
}
/* Setting EP status 0 */
memset(&ep_status, 0, sizeof(ep_status));
if (ipa2_cfg_ep_status(ipa_ep_idx, &ep_status)) {
IPAERR("fail to configure status of EP.\n");
goto ipa_cfg_ep_fail;
}
IPADBG("ep configuration successful\n");
} else {
IPADBG("Skipping endpoint configuration.\n");
}
result = ipa_connect_configure_sps(in, ep, ipa_ep_idx);
if (result) {
IPAERR("fail to configure SPS.\n");
goto ipa_cfg_ep_fail;
}
if (!ipa_ctx->smmu_s1_bypass &&
(in->desc.base == NULL ||
in->data.base == NULL)) {
IPAERR("SMMU S1 enabled: client must allocate FIFOs, data_fifo=0x%p desc_fifo=0x%p\n",
in->data.base, in->desc.base);
goto desc_mem_alloc_fail;
}
if (in->desc.base == NULL) {
result = ipa_connect_allocate_fifo(in, &ep->connect.desc,
&ep->desc_fifo_in_pipe_mem,
&ep->desc_fifo_pipe_mem_ofst,
in->desc_fifo_sz, ipa_ep_idx);
if (result) {
IPAERR("fail to allocate DESC FIFO.\n");
goto desc_mem_alloc_fail;
}
} else {
IPADBG("client allocated DESC FIFO\n");
ep->connect.desc = in->desc;
ep->desc_fifo_client_allocated = 1;
}
IPADBG("Descriptor FIFO pa=%pa, size=%d\n", &ep->connect.desc.phys_base,
ep->connect.desc.size);
if (in->data.base == NULL) {
result = ipa_connect_allocate_fifo(in, &ep->connect.data,
&ep->data_fifo_in_pipe_mem,
&ep->data_fifo_pipe_mem_ofst,
in->data_fifo_sz, ipa_ep_idx);
if (result) {
IPAERR("fail to allocate DATA FIFO.\n");
goto data_mem_alloc_fail;
}
} else {
IPADBG("client allocated DATA FIFO\n");
ep->connect.data = in->data;
ep->data_fifo_client_allocated = 1;
}
IPADBG("Data FIFO pa=%pa, size=%d\n", &ep->connect.data.phys_base,
ep->connect.data.size);
if (!ipa_ctx->smmu_s1_bypass) {
ep->connect.data.iova = ep->connect.data.phys_base;
base = ep->connect.data.iova;
smmu_domain = ipa2_get_smmu_domain();
if (smmu_domain != NULL) {
if (ipa_iommu_map(smmu_domain,
rounddown(base, PAGE_SIZE),
rounddown(base, PAGE_SIZE),
roundup(ep->connect.data.size + base -
rounddown(base, PAGE_SIZE), PAGE_SIZE),
IOMMU_READ | IOMMU_WRITE)) {
IPAERR("Fail to ipa_iommu_map data FIFO\n");
goto iommu_map_data_fail;
}
}
ep->connect.desc.iova = ep->connect.desc.phys_base;
base = ep->connect.desc.iova;
if (smmu_domain != NULL) {
if (ipa_iommu_map(smmu_domain,
rounddown(base, PAGE_SIZE),
rounddown(base, PAGE_SIZE),
roundup(ep->connect.desc.size + base -
rounddown(base, PAGE_SIZE), PAGE_SIZE),
IOMMU_READ | IOMMU_WRITE)) {
IPAERR("Fail to ipa_iommu_map desc FIFO\n");
goto iommu_map_desc_fail;
}
}
}
if ((ipa_ctx->ipa_hw_type == IPA_HW_v2_0 ||
ipa_ctx->ipa_hw_type == IPA_HW_v2_5 ||
ipa_ctx->ipa_hw_type == IPA_HW_v2_6L) &&
IPA_CLIENT_IS_USB_CONS(in->client))
ep->connect.event_thresh = IPA_USB_EVENT_THRESHOLD;
else
ep->connect.event_thresh = IPA_EVENT_THRESHOLD;
ep->connect.options = SPS_O_AUTO_ENABLE; /* BAM-to-BAM */
result = ipa_sps_connect_safe(ep->ep_hdl, &ep->connect, in->client);
if (result) {
IPAERR("sps_connect fails.\n");
goto sps_connect_fail;
}
sps->ipa_bam_hdl = ipa_ctx->bam_handle;
sps->ipa_ep_idx = ipa_ep_idx;
*clnt_hdl = ipa_ep_idx;
memcpy(&sps->desc, &ep->connect.desc, sizeof(struct sps_mem_buffer));
memcpy(&sps->data, &ep->connect.data, sizeof(struct sps_mem_buffer));
ipa_ctx->skip_ep_cfg_shadow[ipa_ep_idx] = ep->skip_ep_cfg;
if (!ep->skip_ep_cfg && IPA_CLIENT_IS_PROD(in->client))
ipa_install_dflt_flt_rules(ipa_ep_idx);
if (!ep->keep_ipa_awake)
IPA_ACTIVE_CLIENTS_DEC_EP(in->client);
IPADBG("client %d (ep: %d) connected\n", in->client, ipa_ep_idx);
return 0;
sps_connect_fail:
if (!ipa_ctx->smmu_s1_bypass) {
base = ep->connect.desc.iova;
smmu_domain = ipa2_get_smmu_domain();
if (smmu_domain != NULL) {
iommu_unmap(smmu_domain,
rounddown(base, PAGE_SIZE),
roundup(ep->connect.desc.size + base -
rounddown(base, PAGE_SIZE), PAGE_SIZE));
}
}
iommu_map_desc_fail:
if (!ipa_ctx->smmu_s1_bypass) {
base = ep->connect.data.iova;
smmu_domain = ipa2_get_smmu_domain();
if (smmu_domain != NULL) {
iommu_unmap(smmu_domain,
rounddown(base, PAGE_SIZE),
roundup(ep->connect.data.size + base -
rounddown(base, PAGE_SIZE), PAGE_SIZE));
}
}
iommu_map_data_fail:
if (!ep->data_fifo_client_allocated) {
if (!ep->data_fifo_in_pipe_mem)
dma_free_coherent(ipa_ctx->pdev,
ep->connect.data.size,
ep->connect.data.base,
ep->connect.data.phys_base);
else
ipa_pipe_mem_free(ep->data_fifo_pipe_mem_ofst,
ep->connect.data.size);
}
data_mem_alloc_fail:
if (!ep->desc_fifo_client_allocated) {
if (!ep->desc_fifo_in_pipe_mem)
dma_free_coherent(ipa_ctx->pdev,
ep->connect.desc.size,
ep->connect.desc.base,
ep->connect.desc.phys_base);
else
ipa_pipe_mem_free(ep->desc_fifo_pipe_mem_ofst,
ep->connect.desc.size);
}
desc_mem_alloc_fail:
sps_free_endpoint(ep->ep_hdl);
ipa_cfg_ep_fail:
memset(&ipa_ctx->ep[ipa_ep_idx], 0, sizeof(struct ipa_ep_context));
IPA_ACTIVE_CLIENTS_DEC_EP(in->client);
fail:
return result;
}
static int ipa2_smmu_unmap_peer_bam(unsigned long dev)
{
size_t len;
struct iommu_domain *smmu_domain;
struct ipa_smmu_cb_ctx *cb = ipa2_get_smmu_ctx();
if (!ipa_ctx->smmu_s1_bypass) {
WARN_ON(dev != ipa_ctx->peer_bam_dev);
ipa_ctx->peer_bam_map_cnt--;
if (ipa_ctx->peer_bam_map_cnt == 0) {
len = roundup(ipa_ctx->peer_bam_map_size +
ipa_ctx->peer_bam_pa -
rounddown(ipa_ctx->peer_bam_pa,
PAGE_SIZE), PAGE_SIZE);
smmu_domain = ipa2_get_smmu_domain();
if (smmu_domain != NULL) {
if (iommu_unmap(smmu_domain,
cb->va_end, len) != len) {
IPAERR("Fail to iommu_unmap\n");
return -EINVAL;
}
IPADBG("Peer bam %lu unmapped\n", dev);
}
}
}
return 0;
}
/**
* ipa2_disconnect() - low-level IPA client disconnect
* @clnt_hdl: [in] opaque client handle assigned by IPA to client
*
 * Should be called by the driver of the peripheral that wants to disconnect
 * from IPA in BAM-BAM mode. This API expects the caller to free any needed
 * headers, routing and filtering tables and rules.
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_disconnect(u32 clnt_hdl)
{
int result;
struct ipa_ep_context *ep;
unsigned long peer_bam;
unsigned long base;
struct iommu_domain *smmu_domain;
struct ipa_disable_force_clear_datapath_req_msg_v01 req = {0};
int res;
enum ipa_client_type client_type;
if (unlikely(!ipa_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
if (clnt_hdl >= ipa_ctx->ipa_num_pipes ||
ipa_ctx->ep[clnt_hdl].valid == 0) {
IPAERR("bad parm.\n");
return -EINVAL;
}
ep = &ipa_ctx->ep[clnt_hdl];
client_type = ipa2_get_client_mapping(clnt_hdl);
if (!ep->keep_ipa_awake)
IPA_ACTIVE_CLIENTS_INC_EP(client_type);
/* For the USB 2.0 controller, the ep is disabled first, so this
 * sequence is not needed again when disconnecting the pipe.
 */
if (!ep->ep_disabled) {
/* Set Disconnect in Progress flag. */
spin_lock(&ipa_ctx->disconnect_lock);
ep->disconnect_in_progress = true;
spin_unlock(&ipa_ctx->disconnect_lock);
/* Notify uc to stop monitoring holb on USB BAM
* Producer pipe.
*/
if (IPA_CLIENT_IS_USB_CONS(ep->client)) {
ipa_uc_monitor_holb(ep->client, false);
IPADBG("Disabling holb monitor for client: %d\n",
ep->client);
}
result = ipa_disable_data_path(clnt_hdl);
if (result) {
IPAERR("disable data path failed res=%d clnt=%d.\n",
result, clnt_hdl);
return -EPERM;
}
}
result = sps_disconnect(ep->ep_hdl);
if (result) {
IPAERR("SPS disconnect failed.\n");
return -EPERM;
}
if (IPA_CLIENT_IS_CONS(ep->client))
peer_bam = ep->connect.destination;
else
peer_bam = ep->connect.source;
if (ipa2_smmu_unmap_peer_bam(peer_bam)) {
IPAERR("fail to iommu unmap peer BAM.\n");
return -EPERM;
}
if (!ep->desc_fifo_client_allocated &&
ep->connect.desc.base) {
if (!ep->desc_fifo_in_pipe_mem)
dma_free_coherent(ipa_ctx->pdev,
ep->connect.desc.size,
ep->connect.desc.base,
ep->connect.desc.phys_base);
else
ipa_pipe_mem_free(ep->desc_fifo_pipe_mem_ofst,
ep->connect.desc.size);
}
if (!ep->data_fifo_client_allocated &&
ep->connect.data.base) {
if (!ep->data_fifo_in_pipe_mem)
dma_free_coherent(ipa_ctx->pdev,
ep->connect.data.size,
ep->connect.data.base,
ep->connect.data.phys_base);
else
ipa_pipe_mem_free(ep->data_fifo_pipe_mem_ofst,
ep->connect.data.size);
}
if (!ipa_ctx->smmu_s1_bypass) {
base = ep->connect.desc.iova;
smmu_domain = ipa2_get_smmu_domain();
if (smmu_domain != NULL) {
iommu_unmap(smmu_domain,
rounddown(base, PAGE_SIZE),
roundup(ep->connect.desc.size + base -
rounddown(base, PAGE_SIZE), PAGE_SIZE));
}
}
if (!ipa_ctx->smmu_s1_bypass) {
base = ep->connect.data.iova;
smmu_domain = ipa2_get_smmu_domain();
if (smmu_domain != NULL) {
iommu_unmap(smmu_domain,
rounddown(base, PAGE_SIZE),
roundup(ep->connect.data.size + base -
rounddown(base, PAGE_SIZE), PAGE_SIZE));
}
}
result = sps_free_endpoint(ep->ep_hdl);
if (result) {
IPAERR("SPS de-alloc EP failed.\n");
return -EPERM;
}
ipa_delete_dflt_flt_rules(clnt_hdl);
/* If APPS flow control is not enabled, send a message to modem to
* enable flow control honoring.
*/
if (!ipa_ctx->tethered_flow_control && ep->qmi_request_sent) {
/* Send a message to modem to disable flow control honoring. */
req.request_id = clnt_hdl;
res = qmi_disable_force_clear_datapath_send(&req);
if (res) {
IPADBG("disable_force_clear_datapath failed %d\n",
res);
}
}
spin_lock(&ipa_ctx->disconnect_lock);
memset(&ipa_ctx->ep[clnt_hdl], 0, sizeof(struct ipa_ep_context));
spin_unlock(&ipa_ctx->disconnect_lock);
IPA_ACTIVE_CLIENTS_DEC_EP(client_type);
IPADBG("client (ep: %d) disconnected\n", clnt_hdl);
return 0;
}
/**
* ipa2_reset_endpoint() - reset an endpoint from BAM perspective
* @clnt_hdl: [in] IPA client handle
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_reset_endpoint(u32 clnt_hdl)
{
int res;
struct ipa_ep_context *ep;
if (unlikely(!ipa_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
if (clnt_hdl >= ipa_ctx->ipa_num_pipes) {
IPAERR("Bad parameters.\n");
return -EFAULT;
}
ep = &ipa_ctx->ep[clnt_hdl];
IPA_ACTIVE_CLIENTS_INC_EP(ipa2_get_client_mapping(clnt_hdl));
res = sps_disconnect(ep->ep_hdl);
if (res) {
IPAERR("sps_disconnect() failed, res=%d.\n", res);
goto bail;
} else {
res = ipa_sps_connect_safe(ep->ep_hdl, &ep->connect,
ep->client);
if (res) {
IPAERR("sps_connect() failed, res=%d.\n", res);
goto bail;
}
}
bail:
IPA_ACTIVE_CLIENTS_DEC_EP(ipa2_get_client_mapping(clnt_hdl));
return res;
}
/**
* ipa2_clear_endpoint_delay() - Remove ep delay set on the IPA pipe before
* client disconnect.
* @clnt_hdl: [in] opaque client handle assigned by IPA to client
*
 * Should be called by the driver of the peripheral that wants to remove the
 * ep delay on an IPA consumer pipe before disconnecting in BAM-BAM mode. This
 * API expects the caller to free any needed headers, routing and filtering
 * tables and rules.
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_clear_endpoint_delay(u32 clnt_hdl)
{
struct ipa_ep_context *ep;
struct ipa_ep_cfg_ctrl ep_ctrl = {0};
struct ipa_enable_force_clear_datapath_req_msg_v01 req = {0};
int res;
if (unlikely(!ipa_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
if (clnt_hdl >= ipa_ctx->ipa_num_pipes ||
ipa_ctx->ep[clnt_hdl].valid == 0) {
IPAERR("bad parm.\n");
return -EINVAL;
}
ep = &ipa_ctx->ep[clnt_hdl];
if (!ipa_ctx->tethered_flow_control) {
IPADBG("APPS flow control is not enabled\n");
/* Send a message to modem to disable flow control honoring. */
req.request_id = clnt_hdl;
req.source_pipe_bitmask = 1 << clnt_hdl;
res = qmi_enable_force_clear_datapath_send(&req);
if (res) {
IPADBG("enable_force_clear_datapath failed %d\n",
res);
}
ep->qmi_request_sent = true;
}
IPA_ACTIVE_CLIENTS_INC_EP(ipa2_get_client_mapping(clnt_hdl));
/* Set disconnect in progress flag so further flow control events are
* not honored.
*/
spin_lock(&ipa_ctx->disconnect_lock);
ep->disconnect_in_progress = true;
spin_unlock(&ipa_ctx->disconnect_lock);
/* If flow is disabled at this point, restore the ep state.*/
ep_ctrl.ipa_ep_delay = false;
ep_ctrl.ipa_ep_suspend = false;
ipa2_cfg_ep_ctrl(clnt_hdl, &ep_ctrl);
IPA_ACTIVE_CLIENTS_DEC_EP(ipa2_get_client_mapping(clnt_hdl));
IPADBG("client (ep: %d) removed ep delay\n", clnt_hdl);
return 0;
}
/**
* ipa2_disable_endpoint() - low-level IPA client disable endpoint
* @clnt_hdl: [in] opaque client handle assigned by IPA to client
*
* Should be called by the driver of the peripheral that wants to
* disable the pipe from IPA in BAM-BAM mode.
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_disable_endpoint(u32 clnt_hdl)
{
int result;
struct ipa_ep_context *ep;
enum ipa_client_type client_type;
unsigned long bam;
if (unlikely(!ipa_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
if (clnt_hdl >= ipa_ctx->ipa_num_pipes ||
ipa_ctx->ep[clnt_hdl].valid == 0) {
IPAERR("bad parm.\n");
return -EINVAL;
}
ep = &ipa_ctx->ep[clnt_hdl];
client_type = ipa2_get_client_mapping(clnt_hdl);
IPA_ACTIVE_CLIENTS_INC_EP(client_type);
/* Set Disconnect in Progress flag. */
spin_lock(&ipa_ctx->disconnect_lock);
ep->disconnect_in_progress = true;
spin_unlock(&ipa_ctx->disconnect_lock);
/* Notify uc to stop monitoring holb on USB BAM Producer pipe. */
if (IPA_CLIENT_IS_USB_CONS(ep->client)) {
ipa_uc_monitor_holb(ep->client, false);
IPADBG("Disabling holb monitor for client: %d\n", ep->client);
}
result = ipa_disable_data_path(clnt_hdl);
if (result) {
IPAERR("disable data path failed res=%d clnt=%d.\n", result,
clnt_hdl);
goto fail;
}
if (IPA_CLIENT_IS_CONS(ep->client))
bam = ep->connect.source;
else
bam = ep->connect.destination;
result = sps_pipe_reset(bam, clnt_hdl);
if (result) {
IPAERR("SPS pipe reset failed.\n");
goto fail;
}
ep->ep_disabled = true;
IPA_ACTIVE_CLIENTS_DEC_EP(client_type);
IPADBG("client (ep: %d) disabled\n", clnt_hdl);
return 0;
fail:
IPA_ACTIVE_CLIENTS_DEC_EP(client_type);
return -EPERM;
}
/**
 * ipa_sps_connect_safe() - connect an endpoint from the BAM perspective
* @h: [in] sps pipe handle
* @connect: [in] sps connect parameters
* @ipa_client: [in] ipa client handle representing the pipe
*
 * This function connects a BAM pipe using the SPS driver's sps_connect()
 * API. By first requesting the uC interface to reset the pipe, it avoids
 * an IPA HW limitation that does not allow resetting a BAM pipe while
 * there is traffic in the IPA TX command queue.
*
* Returns: 0 on success, negative on failure
*/
int ipa_sps_connect_safe(struct sps_pipe *h, struct sps_connect *connect,
enum ipa_client_type ipa_client)
{
int res;
if (ipa_ctx->ipa_hw_type > IPA_HW_v2_5 || ipa_ctx->skip_uc_pipe_reset) {
IPADBG("uC pipe reset is not required\n");
} else {
res = ipa_uc_reset_pipe(ipa_client);
if (res)
return res;
}
return sps_connect(h, connect);
}
EXPORT_SYMBOL(ipa_sps_connect_safe);


@@ -0,0 +1,884 @@
/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/msm_ipa.h>
#include <linux/mutex.h>
#include <linux/ipa.h>
#include "ipa_i.h"
#define IPA_DMA_POLLING_MIN_SLEEP_RX 1010
#define IPA_DMA_POLLING_MAX_SLEEP_RX 1050
#define IPA_DMA_SYS_DESC_MAX_FIFO_SZ 0x7FF8
#define IPA_DMA_MAX_PKT_SZ 0xFFFF
#define IPA_DMA_MAX_PENDING_SYNC (IPA_SYS_DESC_FIFO_SZ / \
sizeof(struct sps_iovec) - 1)
#define IPA_DMA_MAX_PENDING_ASYNC (IPA_DMA_SYS_DESC_MAX_FIFO_SZ / \
sizeof(struct sps_iovec) - 1)
#define IPADMA_DRV_NAME "ipa_dma"
#define IPADMA_DBG(fmt, args...) \
pr_debug(IPADMA_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args)
#define IPADMA_ERR(fmt, args...) \
pr_err(IPADMA_DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
#define IPADMA_FUNC_ENTRY() \
IPADMA_DBG("ENTRY\n")
#define IPADMA_FUNC_EXIT() \
IPADMA_DBG("EXIT\n")
#ifdef CONFIG_DEBUG_FS
#define IPADMA_MAX_MSG_LEN 1024
static char dbg_buff[IPADMA_MAX_MSG_LEN];
static void ipa_dma_debugfs_init(void);
static void ipa_dma_debugfs_destroy(void);
#else
static void ipa_dma_debugfs_init(void) {}
static void ipa_dma_debugfs_destroy(void) {}
#endif
/**
* struct ipa_dma_xfer_wrapper - IPADMA transfer descr wrapper
* @phys_addr_src: physical address of the source data to copy
* @phys_addr_dest: physical address to store the copied data
* @len: len in bytes to copy
* @link: linked to the wrappers list on the proper(sync/async) cons pipe
* @xfer_done: completion object for sync_memcpy completion
* @callback: IPADMA client provided completion callback
* @user1: cookie1 for above callback
*
* This struct can wrap both sync and async memcpy transfers descriptors.
*/
struct ipa_dma_xfer_wrapper {
u64 phys_addr_src;
u64 phys_addr_dest;
u16 len;
struct list_head link;
struct completion xfer_done;
void (*callback)(void *user1);
void *user1;
};
/**
* struct ipa_dma_ctx -IPADMA driver context information
 * @is_enabled: is ipa_dma enabled?
 * @destroy_pending: destroy ipa_dma after handling all pending memcpy
 * @ipa_dma_xfer_wrapper_cache: cache of ipa_dma_xfer_wrapper structs
 * @sync_lock: lock for synchronisation in sync_memcpy
 * @async_lock: lock for synchronisation in async_memcpy
 * @enable_lock: lock for is_enabled
 * @pending_lock: lock to synchronize is_enabled and the pending counters
 * @done: completed once no works are pending and ipadma can be destroyed
 * @ipa_dma_sync_prod_hdl: handle of sync memcpy producer
 * @ipa_dma_async_prod_hdl: handle of async memcpy producer
 * @ipa_dma_sync_cons_hdl: handle of sync memcpy consumer
 * @ipa_dma_async_cons_hdl: handle of async memcpy consumer
* @sync_memcpy_pending_cnt: number of pending sync memcopy operations
* @async_memcpy_pending_cnt: number of pending async memcopy operations
* @uc_memcpy_pending_cnt: number of pending uc memcopy operations
* @total_sync_memcpy: total number of sync memcpy (statistics)
* @total_async_memcpy: total number of async memcpy (statistics)
* @total_uc_memcpy: total number of uc memcpy (statistics)
*/
struct ipa_dma_ctx {
bool is_enabled;
bool destroy_pending;
struct kmem_cache *ipa_dma_xfer_wrapper_cache;
struct mutex sync_lock;
spinlock_t async_lock;
struct mutex enable_lock;
spinlock_t pending_lock;
struct completion done;
u32 ipa_dma_sync_prod_hdl;
u32 ipa_dma_async_prod_hdl;
u32 ipa_dma_sync_cons_hdl;
u32 ipa_dma_async_cons_hdl;
atomic_t sync_memcpy_pending_cnt;
atomic_t async_memcpy_pending_cnt;
atomic_t uc_memcpy_pending_cnt;
atomic_t total_sync_memcpy;
atomic_t total_async_memcpy;
atomic_t total_uc_memcpy;
};
static struct ipa_dma_ctx *ipa_dma_ctx;
/**
* ipa2_dma_init() -Initialize IPADMA.
*
 * This function initializes all IPADMA internal data and connects the DMA
 * pipes:
 * MEMCPY_DMA_SYNC_PROD -> MEMCPY_DMA_SYNC_CONS
 * MEMCPY_DMA_ASYNC_PROD -> MEMCPY_DMA_ASYNC_CONS
*
* Return codes: 0: success
* -EFAULT: IPADMA is already initialized
* -ENOMEM: allocating memory error
* -EPERM: pipe connection failed
*/
int ipa2_dma_init(void)
{
struct ipa_dma_ctx *ipa_dma_ctx_t;
struct ipa_sys_connect_params sys_in;
int res = 0;
IPADMA_FUNC_ENTRY();
if (ipa_dma_ctx) {
IPADMA_ERR("Already initialized.\n");
return -EFAULT;
}
ipa_dma_ctx_t = kzalloc(sizeof(*(ipa_dma_ctx)), GFP_KERNEL);
if (!ipa_dma_ctx_t) {
IPADMA_ERR("kzalloc error.\n");
return -ENOMEM;
}
ipa_dma_ctx_t->ipa_dma_xfer_wrapper_cache =
kmem_cache_create("IPA_DMA_XFER_WRAPPER",
sizeof(struct ipa_dma_xfer_wrapper), 0, 0, NULL);
if (!ipa_dma_ctx_t->ipa_dma_xfer_wrapper_cache) {
IPAERR(":failed to create ipa dma xfer wrapper cache.\n");
res = -ENOMEM;
goto fail_mem_ctrl;
}
mutex_init(&ipa_dma_ctx_t->enable_lock);
spin_lock_init(&ipa_dma_ctx_t->async_lock);
mutex_init(&ipa_dma_ctx_t->sync_lock);
spin_lock_init(&ipa_dma_ctx_t->pending_lock);
init_completion(&ipa_dma_ctx_t->done);
ipa_dma_ctx_t->is_enabled = false;
ipa_dma_ctx_t->destroy_pending = false;
atomic_set(&ipa_dma_ctx_t->async_memcpy_pending_cnt, 0);
atomic_set(&ipa_dma_ctx_t->sync_memcpy_pending_cnt, 0);
atomic_set(&ipa_dma_ctx_t->uc_memcpy_pending_cnt, 0);
atomic_set(&ipa_dma_ctx_t->total_async_memcpy, 0);
atomic_set(&ipa_dma_ctx_t->total_sync_memcpy, 0);
atomic_set(&ipa_dma_ctx_t->total_uc_memcpy, 0);
/* IPADMA SYNC PROD-source for sync memcpy */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_MEMCPY_DMA_SYNC_PROD;
sys_in.desc_fifo_sz = IPA_SYS_DESC_FIFO_SZ;
sys_in.ipa_ep_cfg.mode.mode = IPA_DMA;
sys_in.ipa_ep_cfg.mode.dst = IPA_CLIENT_MEMCPY_DMA_SYNC_CONS;
sys_in.skip_ep_cfg = false;
if (ipa2_setup_sys_pipe(&sys_in,
&ipa_dma_ctx_t->ipa_dma_sync_prod_hdl)) {
IPADMA_ERR(":setup sync prod pipe failed\n");
res = -EPERM;
goto fail_sync_prod;
}
/* IPADMA SYNC CONS-destination for sync memcpy */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_MEMCPY_DMA_SYNC_CONS;
sys_in.desc_fifo_sz = IPA_SYS_DESC_FIFO_SZ;
sys_in.skip_ep_cfg = false;
sys_in.ipa_ep_cfg.mode.mode = IPA_BASIC;
sys_in.notify = NULL;
sys_in.priv = NULL;
if (ipa2_setup_sys_pipe(&sys_in,
&ipa_dma_ctx_t->ipa_dma_sync_cons_hdl)) {
IPADMA_ERR(":setup sync cons pipe failed.\n");
res = -EPERM;
goto fail_sync_cons;
}
IPADMA_DBG("SYNC MEMCPY pipes are connected\n");
/* IPADMA ASYNC PROD-source for async memcpy */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_MEMCPY_DMA_ASYNC_PROD;
sys_in.desc_fifo_sz = IPA_DMA_SYS_DESC_MAX_FIFO_SZ;
sys_in.ipa_ep_cfg.mode.mode = IPA_DMA;
sys_in.ipa_ep_cfg.mode.dst = IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS;
sys_in.skip_ep_cfg = false;
sys_in.notify = NULL;
if (ipa2_setup_sys_pipe(&sys_in,
&ipa_dma_ctx_t->ipa_dma_async_prod_hdl)) {
IPADMA_ERR(":setup async prod pipe failed.\n");
res = -EPERM;
goto fail_async_prod;
}
/* IPADMA ASYNC CONS-destination for async memcpy */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS;
sys_in.desc_fifo_sz = IPA_DMA_SYS_DESC_MAX_FIFO_SZ;
sys_in.skip_ep_cfg = false;
sys_in.ipa_ep_cfg.mode.mode = IPA_BASIC;
sys_in.notify = ipa_dma_async_memcpy_notify_cb;
sys_in.priv = NULL;
if (ipa2_setup_sys_pipe(&sys_in,
&ipa_dma_ctx_t->ipa_dma_async_cons_hdl)) {
IPADMA_ERR(":setup async cons pipe failed.\n");
res = -EPERM;
goto fail_async_cons;
}
ipa_dma_debugfs_init();
ipa_dma_ctx = ipa_dma_ctx_t;
IPADMA_DBG("ASYNC MEMCPY pipes are connected\n");
IPADMA_FUNC_EXIT();
return res;
fail_async_cons:
ipa2_teardown_sys_pipe(ipa_dma_ctx_t->ipa_dma_async_prod_hdl);
fail_async_prod:
ipa2_teardown_sys_pipe(ipa_dma_ctx_t->ipa_dma_sync_cons_hdl);
fail_sync_cons:
ipa2_teardown_sys_pipe(ipa_dma_ctx_t->ipa_dma_sync_prod_hdl);
fail_sync_prod:
kmem_cache_destroy(ipa_dma_ctx_t->ipa_dma_xfer_wrapper_cache);
fail_mem_ctrl:
kfree(ipa_dma_ctx_t);
ipa_dma_ctx = NULL;
return res;
}
/**
 * ipa2_dma_enable() - Vote for IPA clocks.
 *
 * Return codes: 0: success
 *		-EPERM: IPADMA is not initialized, or is already
 *			enabled
*/
int ipa2_dma_enable(void)
{
IPADMA_FUNC_ENTRY();
if (ipa_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't enable\n");
return -EPERM;
}
mutex_lock(&ipa_dma_ctx->enable_lock);
if (ipa_dma_ctx->is_enabled) {
IPADMA_DBG("Already enabled.\n");
mutex_unlock(&ipa_dma_ctx->enable_lock);
return -EPERM;
}
IPA_ACTIVE_CLIENTS_INC_SPECIAL("DMA");
ipa_dma_ctx->is_enabled = true;
mutex_unlock(&ipa_dma_ctx->enable_lock);
IPADMA_FUNC_EXIT();
return 0;
}
static bool ipa_dma_work_pending(void)
{
if (atomic_read(&ipa_dma_ctx->sync_memcpy_pending_cnt)) {
IPADMA_DBG("pending sync\n");
return true;
}
if (atomic_read(&ipa_dma_ctx->async_memcpy_pending_cnt)) {
IPADMA_DBG("pending async\n");
return true;
}
if (atomic_read(&ipa_dma_ctx->uc_memcpy_pending_cnt)) {
IPADMA_DBG("pending uc\n");
return true;
}
IPADMA_DBG("no pending work\n");
return false;
}
/**
 * ipa2_dma_disable() - Unvote for IPA clocks.
 *
 * Enters power save mode.
 *
 * Return codes: 0: success
 *		-EPERM: IPADMA is not initialized, or is already
 *			disabled
 *		-EFAULT: cannot disable ipa_dma while there are
 *			pending memcpy works
*/
int ipa2_dma_disable(void)
{
unsigned long flags;
IPADMA_FUNC_ENTRY();
if (ipa_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't disable\n");
return -EPERM;
}
mutex_lock(&ipa_dma_ctx->enable_lock);
spin_lock_irqsave(&ipa_dma_ctx->pending_lock, flags);
if (!ipa_dma_ctx->is_enabled) {
IPADMA_DBG("Already disabled.\n");
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
mutex_unlock(&ipa_dma_ctx->enable_lock);
return -EPERM;
}
if (ipa_dma_work_pending()) {
IPADMA_ERR("There is pending work, can't disable.\n");
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
mutex_unlock(&ipa_dma_ctx->enable_lock);
return -EFAULT;
}
ipa_dma_ctx->is_enabled = false;
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
IPA_ACTIVE_CLIENTS_DEC_SPECIAL("DMA");
mutex_unlock(&ipa_dma_ctx->enable_lock);
IPADMA_FUNC_EXIT();
return 0;
}
/**
* ipa2_dma_sync_memcpy()- Perform synchronous memcpy using IPA.
*
* @dest: physical address to store the copied data.
* @src: physical address of the source data to copy.
* @len: number of bytes to copy.
*
* Return codes: 0: success
* -EINVAL: invalid params
 *		-EPERM: operation not permitted as ipa_dma isn't enabled or
 *			initialized
 *		-SPS_ERROR: on sps failures
* -EFAULT: other
*/
int ipa2_dma_sync_memcpy(u64 dest, u64 src, int len)
{
int ep_idx;
int res;
int i = 0;
struct ipa_sys_context *cons_sys;
struct ipa_sys_context *prod_sys;
struct sps_iovec iov;
struct ipa_dma_xfer_wrapper *xfer_descr = NULL;
struct ipa_dma_xfer_wrapper *head_descr = NULL;
unsigned long flags;
IPADMA_FUNC_ENTRY();
if (ipa_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't memcpy\n");
return -EPERM;
}
if ((max(src, dest) - min(src, dest)) < len) {
IPADMA_ERR("invalid addresses - overlapping buffers\n");
return -EINVAL;
}
if (len > IPA_DMA_MAX_PKT_SZ || len <= 0) {
IPADMA_ERR("invalid len, %d\n", len);
return -EINVAL;
}
if (((u32)src != src) || ((u32)dest != dest)) {
IPADMA_ERR("Bad addr - only 32b addr supported for BAM\n");
return -EINVAL;
}
spin_lock_irqsave(&ipa_dma_ctx->pending_lock, flags);
if (!ipa_dma_ctx->is_enabled) {
IPADMA_ERR("can't memcpy, IPADMA isn't enabled\n");
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
return -EPERM;
}
atomic_inc(&ipa_dma_ctx->sync_memcpy_pending_cnt);
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
if (atomic_read(&ipa_dma_ctx->sync_memcpy_pending_cnt) >=
IPA_DMA_MAX_PENDING_SYNC) {
atomic_dec(&ipa_dma_ctx->sync_memcpy_pending_cnt);
IPADMA_DBG("Reached pending requests limit\n");
return -EFAULT;
}
ep_idx = ipa2_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_SYNC_CONS);
if (-1 == ep_idx) {
IPADMA_ERR("Client %u is not mapped\n",
IPA_CLIENT_MEMCPY_DMA_SYNC_CONS);
return -EFAULT;
}
cons_sys = ipa_ctx->ep[ep_idx].sys;
ep_idx = ipa2_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_SYNC_PROD);
if (-1 == ep_idx) {
IPADMA_ERR("Client %u is not mapped\n",
IPA_CLIENT_MEMCPY_DMA_SYNC_PROD);
return -EFAULT;
}
prod_sys = ipa_ctx->ep[ep_idx].sys;
xfer_descr = kmem_cache_zalloc(ipa_dma_ctx->ipa_dma_xfer_wrapper_cache,
GFP_KERNEL);
if (!xfer_descr) {
IPADMA_ERR("failed to alloc xfer descr wrapper\n");
res = -ENOMEM;
goto fail_mem_alloc;
}
xfer_descr->phys_addr_dest = dest;
xfer_descr->phys_addr_src = src;
xfer_descr->len = len;
init_completion(&xfer_descr->xfer_done);
mutex_lock(&ipa_dma_ctx->sync_lock);
list_add_tail(&xfer_descr->link, &cons_sys->head_desc_list);
cons_sys->len++;
res = sps_transfer_one(cons_sys->ep->ep_hdl, dest, len, NULL, 0);
if (res) {
IPADMA_ERR("Failed: sps_transfer_one on dest descr\n");
goto fail_sps_send;
}
res = sps_transfer_one(prod_sys->ep->ep_hdl, src, len,
NULL, SPS_IOVEC_FLAG_EOT);
if (res) {
IPADMA_ERR("Failed: sps_transfer_one on src descr\n");
BUG();
}
head_descr = list_first_entry(&cons_sys->head_desc_list,
struct ipa_dma_xfer_wrapper, link);
/* in case we are not the head of the list, wait for head to wake us */
if (xfer_descr != head_descr) {
mutex_unlock(&ipa_dma_ctx->sync_lock);
wait_for_completion(&xfer_descr->xfer_done);
mutex_lock(&ipa_dma_ctx->sync_lock);
head_descr = list_first_entry(&cons_sys->head_desc_list,
struct ipa_dma_xfer_wrapper, link);
BUG_ON(xfer_descr != head_descr);
}
mutex_unlock(&ipa_dma_ctx->sync_lock);
do {
/* wait for transfer to complete */
res = sps_get_iovec(cons_sys->ep->ep_hdl, &iov);
if (res)
IPADMA_ERR("Failed: get_iovec, returned %d loop#:%d\n"
, res, i);
usleep_range(IPA_DMA_POLLING_MIN_SLEEP_RX,
IPA_DMA_POLLING_MAX_SLEEP_RX);
i++;
} while (iov.addr == 0);
mutex_lock(&ipa_dma_ctx->sync_lock);
list_del(&head_descr->link);
cons_sys->len--;
kmem_cache_free(ipa_dma_ctx->ipa_dma_xfer_wrapper_cache, xfer_descr);
/* wake the head of the list */
if (!list_empty(&cons_sys->head_desc_list)) {
head_descr = list_first_entry(&cons_sys->head_desc_list,
struct ipa_dma_xfer_wrapper, link);
complete(&head_descr->xfer_done);
}
mutex_unlock(&ipa_dma_ctx->sync_lock);
BUG_ON(dest != iov.addr);
BUG_ON(len != iov.size);
atomic_inc(&ipa_dma_ctx->total_sync_memcpy);
atomic_dec(&ipa_dma_ctx->sync_memcpy_pending_cnt);
if (ipa_dma_ctx->destroy_pending && !ipa_dma_work_pending())
complete(&ipa_dma_ctx->done);
IPADMA_FUNC_EXIT();
return res;
fail_sps_send:
list_del(&xfer_descr->link);
cons_sys->len--;
mutex_unlock(&ipa_dma_ctx->sync_lock);
kmem_cache_free(ipa_dma_ctx->ipa_dma_xfer_wrapper_cache, xfer_descr);
fail_mem_alloc:
atomic_dec(&ipa_dma_ctx->sync_memcpy_pending_cnt);
if (ipa_dma_ctx->destroy_pending && !ipa_dma_work_pending())
complete(&ipa_dma_ctx->done);
return res;
}
/**
* ipa2_dma_async_memcpy()- Perform asynchronous memcpy using IPA.
*
* @dest: physical address to store the copied data.
* @src: physical address of the source data to copy.
* @len: number of bytes to copy.
* @user_cb: callback function to notify the client when the copy was done.
* @user_param: cookie for user_cb.
*
* Return codes: 0: success
* -EINVAL: invalid params
 *		-EPERM: operation not permitted as ipa_dma isn't enabled or
 *			initialized
 *		-SPS_ERROR: on sps failures
* -EFAULT: descr fifo is full.
*/
int ipa2_dma_async_memcpy(u64 dest, u64 src, int len,
void (*user_cb)(void *user1), void *user_param)
{
int ep_idx;
int res = 0;
struct ipa_dma_xfer_wrapper *xfer_descr = NULL;
struct ipa_sys_context *prod_sys;
struct ipa_sys_context *cons_sys;
unsigned long flags;
IPADMA_FUNC_ENTRY();
if (ipa_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't memcpy\n");
return -EPERM;
}
if ((max(src, dest) - min(src, dest)) < len) {
IPADMA_ERR("invalid addresses - overlapping buffers\n");
return -EINVAL;
}
if (len > IPA_DMA_MAX_PKT_SZ || len <= 0) {
IPADMA_ERR("invalid len, %d\n", len);
return -EINVAL;
}
if (((u32)src != src) || ((u32)dest != dest)) {
IPADMA_ERR("Bad addr - only 32b addr supported for BAM\n");
return -EINVAL;
}
if (!user_cb) {
IPADMA_ERR("null pointer: user_cb\n");
return -EINVAL;
}
spin_lock_irqsave(&ipa_dma_ctx->pending_lock, flags);
if (!ipa_dma_ctx->is_enabled) {
IPADMA_ERR("can't memcpy, IPA_DMA isn't enabled\n");
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
return -EPERM;
}
atomic_inc(&ipa_dma_ctx->async_memcpy_pending_cnt);
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
if (atomic_read(&ipa_dma_ctx->async_memcpy_pending_cnt) >=
IPA_DMA_MAX_PENDING_ASYNC) {
atomic_dec(&ipa_dma_ctx->async_memcpy_pending_cnt);
IPADMA_DBG("Reached pending requests limit\n");
return -EFAULT;
}
ep_idx = ipa2_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS);
if (-1 == ep_idx) {
IPADMA_ERR("Client %u is not mapped\n",
IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS);
return -EFAULT;
}
cons_sys = ipa_ctx->ep[ep_idx].sys;
ep_idx = ipa2_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_ASYNC_PROD);
if (-1 == ep_idx) {
IPADMA_ERR("Client %u is not mapped\n",
IPA_CLIENT_MEMCPY_DMA_SYNC_PROD);
return -EFAULT;
}
prod_sys = ipa_ctx->ep[ep_idx].sys;
xfer_descr = kmem_cache_zalloc(ipa_dma_ctx->ipa_dma_xfer_wrapper_cache,
GFP_KERNEL);
if (!xfer_descr) {
IPADMA_ERR("failed to alloc xfer descr wrapper\n");
res = -ENOMEM;
goto fail_mem_alloc;
}
xfer_descr->phys_addr_dest = dest;
xfer_descr->phys_addr_src = src;
xfer_descr->len = len;
xfer_descr->callback = user_cb;
xfer_descr->user1 = user_param;
spin_lock_irqsave(&ipa_dma_ctx->async_lock, flags);
list_add_tail(&xfer_descr->link, &cons_sys->head_desc_list);
cons_sys->len++;
res = sps_transfer_one(cons_sys->ep->ep_hdl, dest, len, xfer_descr, 0);
if (res) {
IPADMA_ERR("Failed: sps_transfer_one on dest descr\n");
goto fail_sps_send;
}
res = sps_transfer_one(prod_sys->ep->ep_hdl, src, len,
NULL, SPS_IOVEC_FLAG_EOT);
if (res) {
IPADMA_ERR("Failed: sps_transfer_one on src descr\n");
BUG();
goto fail_sps_send;
}
spin_unlock_irqrestore(&ipa_dma_ctx->async_lock, flags);
IPADMA_FUNC_EXIT();
return res;
fail_sps_send:
list_del(&xfer_descr->link);
spin_unlock_irqrestore(&ipa_dma_ctx->async_lock, flags);
kmem_cache_free(ipa_dma_ctx->ipa_dma_xfer_wrapper_cache, xfer_descr);
fail_mem_alloc:
atomic_dec(&ipa_dma_ctx->async_memcpy_pending_cnt);
if (ipa_dma_ctx->destroy_pending && !ipa_dma_work_pending())
complete(&ipa_dma_ctx->done);
return res;
}
/**
* ipa2_dma_uc_memcpy() - Perform a memcpy action using IPA uC
* @dest: physical address to store the copied data.
* @src: physical address of the source data to copy.
* @len: number of bytes to copy.
*
* Return codes: 0: success
* -EINVAL: invalid params
 *		-EPERM: operation not permitted as ipa_dma isn't enabled or
 *			initialized
* -EBADF: IPA uC is not loaded
*/
int ipa2_dma_uc_memcpy(phys_addr_t dest, phys_addr_t src, int len)
{
int res;
unsigned long flags;
IPADMA_FUNC_ENTRY();
if (ipa_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't memcpy\n");
return -EPERM;
}
if ((max(src, dest) - min(src, dest)) < len) {
IPADMA_ERR("invalid addresses - overlapping buffers\n");
return -EINVAL;
}
if (len > IPA_DMA_MAX_PKT_SZ || len <= 0) {
IPADMA_ERR("invalid len, %d\n", len);
return -EINVAL;
}
spin_lock_irqsave(&ipa_dma_ctx->pending_lock, flags);
if (!ipa_dma_ctx->is_enabled) {
IPADMA_ERR("can't memcpy, IPADMA isn't enabled\n");
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
return -EPERM;
}
atomic_inc(&ipa_dma_ctx->uc_memcpy_pending_cnt);
spin_unlock_irqrestore(&ipa_dma_ctx->pending_lock, flags);
res = ipa_uc_memcpy(dest, src, len);
if (res) {
IPADMA_ERR("ipa_uc_memcpy failed %d\n", res);
goto dec_and_exit;
}
atomic_inc(&ipa_dma_ctx->total_uc_memcpy);
res = 0;
dec_and_exit:
atomic_dec(&ipa_dma_ctx->uc_memcpy_pending_cnt);
if (ipa_dma_ctx->destroy_pending && !ipa_dma_work_pending())
complete(&ipa_dma_ctx->done);
IPADMA_FUNC_EXIT();
return res;
}
/**
 * ipa2_dma_destroy() - teardown IPADMA pipes and release IPADMA resources
 *
 * This is a blocking function; it returns only after IPADMA is destroyed.
*/
void ipa2_dma_destroy(void)
{
int res = 0;
IPADMA_FUNC_ENTRY();
if (!ipa_dma_ctx) {
IPADMA_DBG("IPADMA isn't initialized\n");
return;
}
if (ipa_dma_work_pending()) {
ipa_dma_ctx->destroy_pending = true;
		IPADMA_DBG("Pending memcpy operations, waiting for completion\n");
wait_for_completion(&ipa_dma_ctx->done);
}
res = ipa2_teardown_sys_pipe(ipa_dma_ctx->ipa_dma_async_cons_hdl);
if (res)
IPADMA_ERR("teardown IPADMA ASYNC CONS failed\n");
ipa_dma_ctx->ipa_dma_async_cons_hdl = 0;
res = ipa2_teardown_sys_pipe(ipa_dma_ctx->ipa_dma_sync_cons_hdl);
if (res)
IPADMA_ERR("teardown IPADMA SYNC CONS failed\n");
ipa_dma_ctx->ipa_dma_sync_cons_hdl = 0;
res = ipa2_teardown_sys_pipe(ipa_dma_ctx->ipa_dma_async_prod_hdl);
if (res)
IPADMA_ERR("teardown IPADMA ASYNC PROD failed\n");
ipa_dma_ctx->ipa_dma_async_prod_hdl = 0;
res = ipa2_teardown_sys_pipe(ipa_dma_ctx->ipa_dma_sync_prod_hdl);
if (res)
IPADMA_ERR("teardown IPADMA SYNC PROD failed\n");
ipa_dma_ctx->ipa_dma_sync_prod_hdl = 0;
ipa_dma_debugfs_destroy();
kmem_cache_destroy(ipa_dma_ctx->ipa_dma_xfer_wrapper_cache);
kfree(ipa_dma_ctx);
ipa_dma_ctx = NULL;
IPADMA_FUNC_EXIT();
}
/**
 * ipa_dma_async_memcpy_notify_cb() - callback called by the IPA driver,
 * after a notification from the SPS driver or in poll mode, when an Rx
 * operation completes (data was written to the dest descriptor on the
 * async_cons ep).
 *
 * @priv - not in use.
 * @evt - event name - IPA_RECEIVE.
 * @data - the iovec.
*/
void ipa_dma_async_memcpy_notify_cb(void *priv,
	enum ipa_dp_evt_type evt, unsigned long data)
{
int ep_idx = 0;
struct sps_iovec *iov = (struct sps_iovec *) data;
struct ipa_dma_xfer_wrapper *xfer_descr_expected;
struct ipa_sys_context *sys;
unsigned long flags;
IPADMA_FUNC_ENTRY();
ep_idx = ipa2_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS);
sys = ipa_ctx->ep[ep_idx].sys;
spin_lock_irqsave(&ipa_dma_ctx->async_lock, flags);
xfer_descr_expected = list_first_entry(&sys->head_desc_list,
struct ipa_dma_xfer_wrapper, link);
list_del(&xfer_descr_expected->link);
sys->len--;
spin_unlock_irqrestore(&ipa_dma_ctx->async_lock, flags);
BUG_ON(xfer_descr_expected->phys_addr_dest != iov->addr);
BUG_ON(xfer_descr_expected->len != iov->size);
atomic_inc(&ipa_dma_ctx->total_async_memcpy);
atomic_dec(&ipa_dma_ctx->async_memcpy_pending_cnt);
xfer_descr_expected->callback(xfer_descr_expected->user1);
kmem_cache_free(ipa_dma_ctx->ipa_dma_xfer_wrapper_cache,
xfer_descr_expected);
if (ipa_dma_ctx->destroy_pending && !ipa_dma_work_pending())
complete(&ipa_dma_ctx->done);
IPADMA_FUNC_EXIT();
}
#ifdef CONFIG_DEBUG_FS
static struct dentry *dent;
static struct dentry *dfile_info;
static ssize_t ipa_dma_debugfs_read(struct file *file, char __user *ubuf,
size_t count, loff_t *ppos)
{
int nbytes = 0;
if (!ipa_dma_ctx) {
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"Not initialized\n");
} else {
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"Status:\n IPADMA is %s\n",
(ipa_dma_ctx->is_enabled) ? "Enabled" : "Disabled");
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"Statistics:\n total sync memcpy: %d\n ",
atomic_read(&ipa_dma_ctx->total_sync_memcpy));
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"total async memcpy: %d\n ",
atomic_read(&ipa_dma_ctx->total_async_memcpy));
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"pending sync memcpy jobs: %d\n ",
atomic_read(&ipa_dma_ctx->sync_memcpy_pending_cnt));
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"pending async memcpy jobs: %d\n",
atomic_read(&ipa_dma_ctx->async_memcpy_pending_cnt));
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"pending uc memcpy jobs: %d\n",
atomic_read(&ipa_dma_ctx->uc_memcpy_pending_cnt));
}
return simple_read_from_buffer(ubuf, count, ppos, dbg_buff, nbytes);
}
static ssize_t ipa_dma_debugfs_reset_statistics(struct file *file,
const char __user *ubuf,
size_t count,
loff_t *ppos)
{
unsigned long missing;
s8 in_num = 0;
if (sizeof(dbg_buff) < count + 1)
return -EFAULT;
missing = copy_from_user(dbg_buff, ubuf, count);
if (missing)
return -EFAULT;
dbg_buff[count] = '\0';
if (kstrtos8(dbg_buff, 0, &in_num))
return -EFAULT;
switch (in_num) {
case 0:
if (ipa_dma_work_pending())
			IPADMA_DBG("Note: there are pending memcpy operations\n");
atomic_set(&ipa_dma_ctx->total_async_memcpy, 0);
atomic_set(&ipa_dma_ctx->total_sync_memcpy, 0);
break;
default:
IPADMA_ERR("invalid argument: To reset statistics echo 0\n");
break;
}
return count;
}
const struct file_operations ipadma_stats_ops = {
.read = ipa_dma_debugfs_read,
.write = ipa_dma_debugfs_reset_statistics,
};
static void ipa_dma_debugfs_init(void)
{
const mode_t read_write_mode = S_IRUSR | S_IRGRP | S_IROTH |
S_IWUSR | S_IWGRP | S_IWOTH;
dent = debugfs_create_dir("ipa_dma", 0);
if (IS_ERR(dent)) {
IPADMA_ERR("fail to create folder ipa_dma\n");
return;
}
dfile_info =
debugfs_create_file("info", read_write_mode, dent,
0, &ipadma_stats_ops);
if (!dfile_info || IS_ERR(dfile_info)) {
		IPADMA_ERR("fail to create info file\n");
goto fail;
}
return;
fail:
debugfs_remove_recursive(dent);
}
static void ipa_dma_debugfs_destroy(void)
{
debugfs_remove_recursive(dent);
}
#endif /* CONFIG_DEBUG_FS */

/* Copyright (c) 2012-2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_HW_DEFS_H
#define _IPA_HW_DEFS_H
#include <linux/bitops.h>
/* This header defines various HW related data types */
/* immediate command op-codes */
#define IPA_DECIPH_INIT (1)
#define IPA_PPP_FRM_INIT (2)
#define IPA_IP_V4_FILTER_INIT (3)
#define IPA_IP_V6_FILTER_INIT (4)
#define IPA_IP_V4_NAT_INIT (5)
#define IPA_IP_V6_NAT_INIT (6)
#define IPA_IP_V4_ROUTING_INIT (7)
#define IPA_IP_V6_ROUTING_INIT (8)
#define IPA_HDR_INIT_LOCAL (9)
#define IPA_HDR_INIT_SYSTEM (10)
#define IPA_DECIPH_SETUP (11)
#define IPA_REGISTER_WRITE (12)
#define IPA_NAT_DMA (14)
#define IPA_IP_PACKET_TAG (15)
#define IPA_IP_PACKET_INIT (16)
#define IPA_DMA_SHARED_MEM (19)
#define IPA_IP_PACKET_TAG_STATUS (20)
/* Processing context TLV type */
#define IPA_PROC_CTX_TLV_TYPE_END 0
#define IPA_PROC_CTX_TLV_TYPE_HDR_ADD 1
#define IPA_PROC_CTX_TLV_TYPE_PROC_CMD 3
/**
* struct ipa_flt_rule_hw_hdr - HW header of IPA filter rule
* @word: filtering rule properties
* @en_rule: enable rule
* @action: post routing action
* @rt_tbl_idx: index in routing table
 * @retain_hdr: set to add back to the packet the header removed
 *  as part of header removal; this is done in the header
 *  insertion block.
 * @to_uc: directs IPA to send the packet to the uC instead of
 *  the intended destination. This is performed just after the
 *  routing block processing, so routing will have determined the
 *  destination end point and the uC will receive this information
 *  together with the packet as part of the HW packet TX commands
* @rsvd: reserved bits
*/
struct ipa_flt_rule_hw_hdr {
union {
u32 word;
struct {
u32 en_rule:16;
u32 action:5;
u32 rt_tbl_idx:5;
u32 retain_hdr:1;
u32 to_uc:1;
u32 rsvd:4;
} hdr;
} u;
};
/**
* struct ipa_rt_rule_hw_hdr - HW header of IPA routing rule
 * @word: routing rule properties
* @en_rule: enable rule
* @pipe_dest_idx: destination pipe index
* @system: changed from local to system due to HW change
* @hdr_offset: header offset
* @proc_ctx: whether hdr_offset points to header table or to
* header processing context table
*/
struct ipa_rt_rule_hw_hdr {
union {
u32 word;
struct {
u32 en_rule:16;
u32 pipe_dest_idx:5;
u32 system:1;
u32 hdr_offset:10;
} hdr;
struct {
u32 en_rule:16;
u32 pipe_dest_idx:5;
u32 system:1;
u32 hdr_offset:9;
u32 proc_ctx:1;
} hdr_v2_5;
} u;
};
/**
* struct ipa_ip_v4_filter_init - IPA_IP_V4_FILTER_INIT command payload
* @ipv4_rules_addr: address of ipv4 rules
* @size_ipv4_rules: size of the above
* @ipv4_addr: ipv4 address
* @rsvd: reserved
*/
struct ipa_ip_v4_filter_init {
u64 ipv4_rules_addr:32;
u64 size_ipv4_rules:12;
u64 ipv4_addr:16;
u64 rsvd:4;
};
/**
* struct ipa_ip_v6_filter_init - IPA_IP_V6_FILTER_INIT command payload
* @ipv6_rules_addr: address of ipv6 rules
* @size_ipv6_rules: size of the above
* @ipv6_addr: ipv6 address
*/
struct ipa_ip_v6_filter_init {
u64 ipv6_rules_addr:32;
u64 size_ipv6_rules:16;
u64 ipv6_addr:16;
};
/**
* struct ipa_ip_v4_routing_init - IPA_IP_V4_ROUTING_INIT command payload
* @ipv4_rules_addr: address of ipv4 rules
* @size_ipv4_rules: size of the above
* @ipv4_addr: ipv4 address
* @rsvd: reserved
*/
struct ipa_ip_v4_routing_init {
u64 ipv4_rules_addr:32;
u64 size_ipv4_rules:12;
u64 ipv4_addr:16;
u64 rsvd:4;
};
/**
* struct ipa_ip_v6_routing_init - IPA_IP_V6_ROUTING_INIT command payload
* @ipv6_rules_addr: address of ipv6 rules
* @size_ipv6_rules: size of the above
* @ipv6_addr: ipv6 address
*/
struct ipa_ip_v6_routing_init {
u64 ipv6_rules_addr:32;
u64 size_ipv6_rules:16;
u64 ipv6_addr:16;
};
/**
* struct ipa_hdr_init_local - IPA_HDR_INIT_LOCAL command payload
* @hdr_table_src_addr: word address of header table in system memory where the
* table starts (use as source for memory copying)
* @size_hdr_table: size of the above (in bytes)
* @hdr_table_dst_addr: header address in IPA sram (used as dst for memory copy)
* @rsvd: reserved
*/
struct ipa_hdr_init_local {
u64 hdr_table_src_addr:32;
u64 size_hdr_table:12;
u64 hdr_table_dst_addr:16;
u64 rsvd:4;
};
/**
* struct ipa_hdr_init_system - IPA_HDR_INIT_SYSTEM command payload
* @hdr_table_addr: word address of header table in system memory where the
* table starts (use as source for memory copying)
* @rsvd: reserved
*/
struct ipa_hdr_init_system {
u64 hdr_table_addr:32;
u64 rsvd:32;
};
/**
* struct ipa_hdr_proc_ctx_tlv -
* HW structure of IPA processing context header - TLV part
* @type: 0 - end type
* 1 - header addition type
* 3 - processing command type
* @length: number of bytes after tlv
* for type:
* 0 - needs to be 0
* 1 - header addition length
* 3 - number of 32B including type and length.
* @value: specific value for type
* for type:
* 0 - needs to be 0
* 1 - header length
* 3 - command ID (see IPA_HDR_UCP_* definitions)
*/
struct ipa_hdr_proc_ctx_tlv {
u32 type:8;
u32 length:8;
u32 value:16;
};
/**
* struct ipa_hdr_proc_ctx_hdr_add -
* HW structure of IPA processing context - add header tlv
* @tlv: IPA processing context TLV
* @hdr_addr: processing context header address
*/
struct ipa_hdr_proc_ctx_hdr_add {
struct ipa_hdr_proc_ctx_tlv tlv;
u32 hdr_addr;
};
#define IPA_A5_MUX_HDR_EXCP_FLAG_IP BIT(7)
#define IPA_A5_MUX_HDR_EXCP_FLAG_NAT BIT(6)
#define IPA_A5_MUX_HDR_EXCP_FLAG_SW_FLT BIT(5)
#define IPA_A5_MUX_HDR_EXCP_FLAG_TAG BIT(4)
#define IPA_A5_MUX_HDR_EXCP_FLAG_REPLICATED BIT(3)
#define IPA_A5_MUX_HDR_EXCP_FLAG_IHL BIT(2)
/**
* struct ipa_a5_mux_hdr - A5 MUX header definition
* @interface_id: interface ID
* @src_pipe_index: source pipe index
* @flags: flags
* @metadata: metadata
*
* A5 MUX header is in BE, A5 runs in LE. This struct definition
* allows A5 SW to correctly parse the header
*/
struct ipa_a5_mux_hdr {
u16 interface_id;
u8 src_pipe_index;
u8 flags;
u32 metadata;
};
/**
* struct ipa_register_write - IPA_REGISTER_WRITE command payload
* @rsvd: reserved
* @skip_pipeline_clear: 0 to wait until IPA pipeline is clear
* @offset: offset from IPA base address
* @value: value to write to register
* @value_mask: mask specifying which value bits to write to the register
*/
struct ipa_register_write {
u32 rsvd:15;
u32 skip_pipeline_clear:1;
u32 offset:16;
u32 value:32;
u32 value_mask:32;
};
/**
* struct ipa_nat_dma - IPA_NAT_DMA command payload
* @table_index: NAT table index
* @rsvd1: reserved
* @base_addr: base address
* @rsvd2: reserved
* @offset: offset
* @data: metadata
* @rsvd3: reserved
*/
struct ipa_nat_dma {
u64 table_index:3;
u64 rsvd1:1;
u64 base_addr:2;
u64 rsvd2:2;
u64 offset:32;
u64 data:16;
u64 rsvd3:8;
};
/**
 * struct ipa_ip_packet_init - IPA_IP_PACKET_INIT command payload
* @destination_pipe_index: destination pipe index
* @rsvd1: reserved
* @metadata: metadata
* @rsvd2: reserved
*/
struct ipa_ip_packet_init {
u64 destination_pipe_index:5;
u64 rsvd1:3;
u64 metadata:32;
u64 rsvd2:24;
};
/**
 * struct ipa_ip_v4_nat_init - IPA_IP_V4_NAT_INIT command payload
* @ipv4_rules_addr: ipv4 rules address
* @ipv4_expansion_rules_addr: ipv4 expansion rules address
* @index_table_addr: index tables address
* @index_table_expansion_addr: index expansion table address
* @table_index: index in table
* @ipv4_rules_addr_type: ipv4 address type
* @ipv4_expansion_rules_addr_type: ipv4 expansion address type
* @index_table_addr_type: index table address type
* @index_table_expansion_addr_type: index expansion table type
* @size_base_tables: size of base tables
* @size_expansion_tables: size of expansion tables
* @rsvd2: reserved
* @public_ip_addr: public IP address
*/
struct ipa_ip_v4_nat_init {
u64 ipv4_rules_addr:32;
u64 ipv4_expansion_rules_addr:32;
u64 index_table_addr:32;
u64 index_table_expansion_addr:32;
u64 table_index:3;
u64 rsvd1:1;
u64 ipv4_rules_addr_type:1;
u64 ipv4_expansion_rules_addr_type:1;
u64 index_table_addr_type:1;
u64 index_table_expansion_addr_type:1;
u64 size_base_tables:12;
u64 size_expansion_tables:10;
u64 rsvd2:2;
u64 public_ip_addr:32;
};
/**
* struct ipa_ip_packet_tag - IPA_IP_PACKET_TAG command payload
* @tag: tag value returned with response
*/
struct ipa_ip_packet_tag {
u32 tag;
};
/**
* struct ipa_ip_packet_tag_status - IPA_IP_PACKET_TAG_STATUS command payload
* @rsvd: reserved
* @tag_f_1: tag value returned within status
* @tag_f_2: tag value returned within status
*/
struct ipa_ip_packet_tag_status {
u32 rsvd:16;
u32 tag_f_1:16;
u32 tag_f_2:32;
};
/*! @brief Struct for the IPAv2.0 and IPAv2.5 UL packet status header */
struct ipa_hw_pkt_status {
u32 status_opcode:8;
u32 exception:8;
u32 status_mask:16;
u32 pkt_len:16;
u32 endp_src_idx:5;
u32 reserved_1:3;
u32 endp_dest_idx:5;
u32 reserved_2:3;
u32 metadata:32;
union {
struct {
u32 filt_local:1;
u32 filt_global:1;
u32 filt_pipe_idx:5;
u32 filt_match:1;
u32 filt_rule_idx:6;
u32 ret_hdr:1;
u32 reserved_3:1;
u32 tag_f_1:16;
} ipa_hw_v2_0_pkt_status;
struct {
u32 filt_local:1;
u32 filt_global:1;
u32 filt_pipe_idx:5;
u32 ret_hdr:1;
u32 filt_rule_idx:8;
u32 tag_f_1:16;
} ipa_hw_v2_5_pkt_status;
};
u32 tag_f_2:32;
u32 time_day_ctr:32;
u32 nat_hit:1;
u32 nat_tbl_idx:13;
u32 nat_type:2;
u32 route_local:1;
u32 route_tbl_idx:5;
u32 route_match:1;
u32 ucp:1;
u32 route_rule_idx:8;
u32 hdr_local:1;
u32 hdr_offset:10;
u32 frag_hit:1;
u32 frag_rule:4;
u32 reserved_4:16;
};
#define IPA_PKT_STATUS_SIZE 32
/*! @brief Status header opcodes */
enum ipa_hw_status_opcode {
IPA_HW_STATUS_OPCODE_MIN,
IPA_HW_STATUS_OPCODE_PACKET = IPA_HW_STATUS_OPCODE_MIN,
IPA_HW_STATUS_OPCODE_NEW_FRAG_RULE,
IPA_HW_STATUS_OPCODE_DROPPED_PACKET,
IPA_HW_STATUS_OPCODE_SUSPENDED_PACKET,
IPA_HW_STATUS_OPCODE_XLAT_PACKET = 6,
IPA_HW_STATUS_OPCODE_MAX
};
/*! @brief Possible Masks received in status */
enum ipa_hw_pkt_status_mask {
IPA_HW_PKT_STATUS_MASK_FRAG_PROCESS = 0x1,
IPA_HW_PKT_STATUS_MASK_FILT_PROCESS = 0x2,
IPA_HW_PKT_STATUS_MASK_NAT_PROCESS = 0x4,
IPA_HW_PKT_STATUS_MASK_ROUTE_PROCESS = 0x8,
IPA_HW_PKT_STATUS_MASK_TAG_VALID = 0x10,
IPA_HW_PKT_STATUS_MASK_FRAGMENT = 0x20,
IPA_HW_PKT_STATUS_MASK_FIRST_FRAGMENT = 0x40,
IPA_HW_PKT_STATUS_MASK_V4 = 0x80,
IPA_HW_PKT_STATUS_MASK_CKSUM_PROCESS = 0x100,
IPA_HW_PKT_STATUS_MASK_AGGR_PROCESS = 0x200,
IPA_HW_PKT_STATUS_MASK_DEST_EOT = 0x400,
IPA_HW_PKT_STATUS_MASK_DEAGGR_PROCESS = 0x800,
IPA_HW_PKT_STATUS_MASK_DEAGG_FIRST = 0x1000,
IPA_HW_PKT_STATUS_MASK_SRC_EOT = 0x2000
};
/*! @brief Possible Exceptions received in status */
enum ipa_hw_pkt_status_exception {
IPA_HW_PKT_STATUS_EXCEPTION_NONE = 0x0,
IPA_HW_PKT_STATUS_EXCEPTION_DEAGGR = 0x1,
IPA_HW_PKT_STATUS_EXCEPTION_REPL = 0x2,
IPA_HW_PKT_STATUS_EXCEPTION_IPTYPE = 0x4,
IPA_HW_PKT_STATUS_EXCEPTION_IHL = 0x8,
IPA_HW_PKT_STATUS_EXCEPTION_FRAG_RULE_MISS = 0x10,
IPA_HW_PKT_STATUS_EXCEPTION_SW_FILT = 0x20,
IPA_HW_PKT_STATUS_EXCEPTION_NAT = 0x40,
IPA_HW_PKT_STATUS_EXCEPTION_ACTUAL_MAX,
IPA_HW_PKT_STATUS_EXCEPTION_MAX = 0xFF
};
/*! @brief IPA_HW_IMM_CMD_DMA_SHARED_MEM Immediate Command Parameters */
struct ipa_hw_imm_cmd_dma_shared_mem {
u32 reserved_1:16;
u32 size:16;
u32 system_addr:32;
u32 local_addr:16;
u32 direction:1;
u32 skip_pipeline_clear:1;
u32 reserved_2:14;
u32 padding:32;
};
#endif /* _IPA_HW_DEFS_H */

/* Copyright (c) 2014-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/interrupt.h>
#include "ipa_i.h"
#define INTERRUPT_WORKQUEUE_NAME "ipa_interrupt_wq"
#define IPA_IRQ_NUM_MAX 32
struct ipa_interrupt_info {
ipa_irq_handler_t handler;
enum ipa_irq_type interrupt;
void *private_data;
bool deferred_flag;
};
struct ipa_interrupt_work_wrap {
struct work_struct interrupt_work;
ipa_irq_handler_t handler;
enum ipa_irq_type interrupt;
void *private_data;
void *interrupt_data;
};
static struct ipa_interrupt_info ipa_interrupt_to_cb[IPA_IRQ_NUM_MAX];
static struct workqueue_struct *ipa_interrupt_wq;
static u32 ipa_ee;
static void ipa_interrupt_defer(struct work_struct *work);
static DECLARE_WORK(ipa_interrupt_defer_work, ipa_interrupt_defer);
static int ipa2_irq_mapping[IPA_IRQ_MAX] = {
[IPA_BAD_SNOC_ACCESS_IRQ] = 0,
[IPA_EOT_COAL_IRQ] = 1,
[IPA_UC_IRQ_0] = 2,
[IPA_UC_IRQ_1] = 3,
[IPA_UC_IRQ_2] = 4,
[IPA_UC_IRQ_3] = 5,
[IPA_UC_IN_Q_NOT_EMPTY_IRQ] = 6,
[IPA_UC_RX_CMD_Q_NOT_FULL_IRQ] = 7,
[IPA_UC_TX_CMD_Q_NOT_FULL_IRQ] = 8,
[IPA_UC_TO_PROC_ACK_Q_NOT_FULL_IRQ] = 9,
[IPA_PROC_TO_UC_ACK_Q_NOT_EMPTY_IRQ] = 10,
[IPA_RX_ERR_IRQ] = 11,
[IPA_DEAGGR_ERR_IRQ] = 12,
[IPA_TX_ERR_IRQ] = 13,
[IPA_STEP_MODE_IRQ] = 14,
[IPA_PROC_ERR_IRQ] = 15,
[IPA_TX_SUSPEND_IRQ] = 16,
[IPA_TX_HOLB_DROP_IRQ] = 17,
[IPA_BAM_IDLE_IRQ] = 18,
};
static void deferred_interrupt_work(struct work_struct *work)
{
struct ipa_interrupt_work_wrap *work_data =
container_of(work,
struct ipa_interrupt_work_wrap,
interrupt_work);
IPADBG("call handler from workq...\n");
work_data->handler(work_data->interrupt, work_data->private_data,
work_data->interrupt_data);
kfree(work_data->interrupt_data);
kfree(work_data);
}
static bool is_valid_ep(u32 ep_suspend_data)
{
u32 bmsk = 1;
u32 i = 0;
for (i = 0; i < ipa_ctx->ipa_num_pipes; i++) {
if ((ep_suspend_data & bmsk) && (ipa_ctx->ep[i].valid))
return true;
bmsk = bmsk << 1;
}
return false;
}
static int handle_interrupt(int irq_num, bool isr_context)
{
struct ipa_interrupt_info interrupt_info;
struct ipa_interrupt_work_wrap *work_data;
u32 suspend_data;
void *interrupt_data = NULL;
struct ipa_tx_suspend_irq_data *suspend_interrupt_data = NULL;
int res;
interrupt_info = ipa_interrupt_to_cb[irq_num];
if (interrupt_info.handler == NULL) {
IPAERR("A callback function wasn't set for interrupt num %d\n",
irq_num);
return -EINVAL;
}
switch (interrupt_info.interrupt) {
case IPA_TX_SUSPEND_IRQ:
suspend_data = ipa_read_reg(ipa_ctx->mmio,
IPA_IRQ_SUSPEND_INFO_EE_n_ADDR(ipa_ee));
if (!is_valid_ep(suspend_data))
return 0;
suspend_interrupt_data =
kzalloc(sizeof(*suspend_interrupt_data), GFP_ATOMIC);
if (!suspend_interrupt_data) {
IPAERR("failed allocating suspend_interrupt_data\n");
return -ENOMEM;
}
suspend_interrupt_data->endpoints = suspend_data;
interrupt_data = suspend_interrupt_data;
break;
default:
break;
}
/* Force defer processing if in ISR context. */
if (interrupt_info.deferred_flag || isr_context) {
work_data = kzalloc(sizeof(struct ipa_interrupt_work_wrap),
GFP_ATOMIC);
if (!work_data) {
IPAERR("failed allocating ipa_interrupt_work_wrap\n");
res = -ENOMEM;
goto fail_alloc_work;
}
INIT_WORK(&work_data->interrupt_work, deferred_interrupt_work);
work_data->handler = interrupt_info.handler;
work_data->interrupt = interrupt_info.interrupt;
work_data->private_data = interrupt_info.private_data;
work_data->interrupt_data = interrupt_data;
queue_work(ipa_interrupt_wq, &work_data->interrupt_work);
} else {
interrupt_info.handler(interrupt_info.interrupt,
interrupt_info.private_data,
interrupt_data);
kfree(interrupt_data);
}
return 0;
fail_alloc_work:
kfree(interrupt_data);
return res;
}
static inline bool is_uc_irq(int irq_num)
{
if (ipa_interrupt_to_cb[irq_num].interrupt >= IPA_UC_IRQ_0 &&
ipa_interrupt_to_cb[irq_num].interrupt <= IPA_UC_IRQ_3)
return true;
else
return false;
}
static void ipa_process_interrupts(bool isr_context)
{
u32 reg;
u32 bmsk;
u32 i = 0;
u32 en;
bool uc_irq;
en = ipa_read_reg(ipa_ctx->mmio, IPA_IRQ_EN_EE_n_ADDR(ipa_ee));
reg = ipa_read_reg(ipa_ctx->mmio, IPA_IRQ_STTS_EE_n_ADDR(ipa_ee));
while (en & reg) {
bmsk = 1;
for (i = 0; i < IPA_IRQ_NUM_MAX; i++) {
if (!(en & reg & bmsk)) {
bmsk = bmsk << 1;
continue;
}
uc_irq = is_uc_irq(i);
/*
* Clear uC interrupt before processing to avoid
* clearing unhandled interrupts
*/
if (uc_irq)
ipa_write_reg(ipa_ctx->mmio,
IPA_IRQ_CLR_EE_n_ADDR(ipa_ee), bmsk);
/* Process the interrupts */
handle_interrupt(i, isr_context);
/*
* Clear non uC interrupt after processing
* to avoid clearing interrupt data
*/
if (!uc_irq)
ipa_write_reg(ipa_ctx->mmio,
IPA_IRQ_CLR_EE_n_ADDR(ipa_ee), bmsk);
bmsk = bmsk << 1;
}
/*
* Check pending interrupts that may have
* been raised since last read
*/
reg = ipa_read_reg(ipa_ctx->mmio,
IPA_IRQ_STTS_EE_n_ADDR(ipa_ee));
}
}
static void ipa_interrupt_defer(struct work_struct *work)
{
IPADBG("processing interrupts in wq\n");
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
ipa_process_interrupts(false);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
IPADBG("Done\n");
}
static irqreturn_t ipa_isr(int irq, void *ctxt)
{
unsigned long flags;
/* defer interrupt handling in case IPA is not clocked on */
if (ipa_active_clients_trylock(&flags) == 0) {
IPADBG("defer interrupt processing\n");
queue_work(ipa_ctx->power_mgmt_wq, &ipa_interrupt_defer_work);
return IRQ_HANDLED;
}
if (ipa_ctx->ipa_active_clients.cnt == 0) {
IPADBG("defer interrupt processing\n");
queue_work(ipa_ctx->power_mgmt_wq, &ipa_interrupt_defer_work);
goto bail;
}
ipa_process_interrupts(true);
bail:
ipa_active_clients_trylock_unlock(&flags);
return IRQ_HANDLED;
}
/**
 * ipa2_add_interrupt_handler() - Adds a handler for an interrupt type
* @interrupt: Interrupt type
* @handler: The handler to be added
* @deferred_flag: whether the handler processing should be deferred in
* a workqueue
* @private_data: the client's private data
*
 * Adds a handler for an interrupt type and enables the corresponding bit
 * in the IRQ_EN register; the associated interrupt in the IRQ_STTS
 * register will be enabled
*/
int ipa2_add_interrupt_handler(enum ipa_irq_type interrupt,
ipa_irq_handler_t handler,
bool deferred_flag,
void *private_data)
{
u32 val;
u32 bmsk;
int irq_num;
IPADBG("in ipa2_add_interrupt_handler\n");
if (interrupt < IPA_BAD_SNOC_ACCESS_IRQ ||
interrupt >= IPA_IRQ_MAX) {
IPAERR("invalid interrupt number %d\n", interrupt);
return -EINVAL;
}
irq_num = ipa2_irq_mapping[interrupt];
if (irq_num < 0 || irq_num >= IPA_IRQ_NUM_MAX) {
IPAERR("interrupt %d not supported\n", interrupt);
WARN_ON(1);
return -EFAULT;
}
ipa_interrupt_to_cb[irq_num].deferred_flag = deferred_flag;
ipa_interrupt_to_cb[irq_num].handler = handler;
ipa_interrupt_to_cb[irq_num].private_data = private_data;
ipa_interrupt_to_cb[irq_num].interrupt = interrupt;
val = ipa_read_reg(ipa_ctx->mmio, IPA_IRQ_EN_EE_n_ADDR(ipa_ee));
IPADBG("read IPA_IRQ_EN_EE_n_ADDR register. reg = %d\n", val);
bmsk = 1 << irq_num;
val |= bmsk;
ipa_write_reg(ipa_ctx->mmio, IPA_IRQ_EN_EE_n_ADDR(ipa_ee), val);
IPADBG("wrote IPA_IRQ_EN_EE_n_ADDR register. reg = %d\n", val);
return 0;
}
/**
 * ipa2_remove_interrupt_handler() - Removes the handler for an interrupt type
* @interrupt: Interrupt type
*
 * Removes the handler and disables the corresponding bit in the IRQ_EN register
*/
int ipa2_remove_interrupt_handler(enum ipa_irq_type interrupt)
{
u32 val;
u32 bmsk;
int irq_num;
if (interrupt < IPA_BAD_SNOC_ACCESS_IRQ ||
interrupt >= IPA_IRQ_MAX) {
IPAERR("invalid interrupt number %d\n", interrupt);
return -EINVAL;
}
irq_num = ipa2_irq_mapping[interrupt];
if (irq_num < 0 || irq_num >= IPA_IRQ_NUM_MAX) {
IPAERR("interrupt %d not supported\n", interrupt);
WARN_ON(1);
return -EFAULT;
}
kfree(ipa_interrupt_to_cb[irq_num].private_data);
ipa_interrupt_to_cb[irq_num].deferred_flag = false;
ipa_interrupt_to_cb[irq_num].handler = NULL;
ipa_interrupt_to_cb[irq_num].private_data = NULL;
ipa_interrupt_to_cb[irq_num].interrupt = -1;
val = ipa_read_reg(ipa_ctx->mmio, IPA_IRQ_EN_EE_n_ADDR(ipa_ee));
bmsk = 1 << irq_num;
val &= ~bmsk;
ipa_write_reg(ipa_ctx->mmio, IPA_IRQ_EN_EE_n_ADDR(ipa_ee), val);
return 0;
}
/**
* ipa_interrupts_init() - Initialize the IPA interrupts framework
* @ipa_irq: The interrupt number to allocate
* @ee: Execution environment
* @ipa_dev: The basic device structure representing the IPA driver
*
* - Initialize the ipa_interrupt_to_cb array
* - Clear interrupts status
* - Register the ipa interrupt handler - ipa_isr
* - Enable apps processor wakeup by IPA interrupts
*/
int ipa_interrupts_init(u32 ipa_irq, u32 ee, struct device *ipa_dev)
{
int idx;
u32 reg = 0xFFFFFFFF;
int res = 0;
ipa_ee = ee;
for (idx = 0; idx < IPA_IRQ_NUM_MAX; idx++) {
ipa_interrupt_to_cb[idx].deferred_flag = false;
ipa_interrupt_to_cb[idx].handler = NULL;
ipa_interrupt_to_cb[idx].private_data = NULL;
ipa_interrupt_to_cb[idx].interrupt = -1;
}
ipa_interrupt_wq = create_singlethread_workqueue(
INTERRUPT_WORKQUEUE_NAME);
if (!ipa_interrupt_wq) {
IPAERR("workqueue creation failed\n");
return -ENOMEM;
}
	/* Clear the interrupts status */
ipa_write_reg(ipa_ctx->mmio, IPA_IRQ_CLR_EE_n_ADDR(ipa_ee), reg);
res = request_irq(ipa_irq, (irq_handler_t) ipa_isr,
IRQF_TRIGGER_RISING, "ipa", ipa_dev);
if (res) {
IPAERR("fail to register IPA IRQ handler irq=%d\n", ipa_irq);
return -ENODEV;
}
IPADBG("IPA IRQ handler irq=%d registered\n", ipa_irq);
res = enable_irq_wake(ipa_irq);
if (res)
IPAERR("fail to enable IPA IRQ wakeup irq=%d res=%d\n",
ipa_irq, res);
else
IPADBG("IPA IRQ wakeup enabled irq=%d\n", ipa_irq);
return 0;
}

/* Copyright (c) 2013-2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/fs.h>
#include <linux/sched.h>
#include "ipa_i.h"
struct ipa_intf {
char name[IPA_RESOURCE_NAME_MAX];
struct list_head link;
u32 num_tx_props;
u32 num_rx_props;
u32 num_ext_props;
struct ipa_ioc_tx_intf_prop *tx;
struct ipa_ioc_rx_intf_prop *rx;
struct ipa_ioc_ext_intf_prop *ext;
enum ipa_client_type excp_pipe;
};
struct ipa_push_msg {
struct ipa_msg_meta meta;
ipa_msg_free_fn callback;
void *buff;
struct list_head link;
};
struct ipa_pull_msg {
struct ipa_msg_meta meta;
ipa_msg_pull_fn callback;
struct list_head link;
};
/**
* ipa2_register_intf() - register "logical" interface
* @name: [in] interface name
* @tx: [in] TX properties of the interface
* @rx: [in] RX properties of the interface
*
* Register an interface and its tx and rx properties, this allows
* configuration of rules from user-space
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_register_intf(const char *name, const struct ipa_tx_intf *tx,
const struct ipa_rx_intf *rx)
{
if (unlikely(!ipa_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
return ipa2_register_intf_ext(name, tx, rx, NULL);
}
/**
* ipa2_register_intf_ext() - register "logical" interface which has only
* extended properties
* @name: [in] interface name
* @tx: [in] TX properties of the interface
* @rx: [in] RX properties of the interface
* @ext: [in] EXT properties of the interface
*
* Register an interface and its tx, rx and ext properties, this allows
* configuration of rules from user-space
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_register_intf_ext(const char *name, const struct ipa_tx_intf *tx,
const struct ipa_rx_intf *rx,
const struct ipa_ext_intf *ext)
{
struct ipa_intf *intf;
u32 len;
if (name == NULL || (tx == NULL && rx == NULL && ext == NULL)) {
IPAERR("invalid params name=%p tx=%p rx=%p ext=%p\n", name,
tx, rx, ext);
return -EINVAL;
}
if (tx && tx->num_props > IPA_NUM_PROPS_MAX) {
IPAERR("invalid tx num_props=%d max=%d\n", tx->num_props,
IPA_NUM_PROPS_MAX);
return -EINVAL;
}
if (rx && rx->num_props > IPA_NUM_PROPS_MAX) {
IPAERR("invalid rx num_props=%d max=%d\n", rx->num_props,
IPA_NUM_PROPS_MAX);
return -EINVAL;
}
if (ext && ext->num_props > IPA_NUM_PROPS_MAX) {
IPAERR("invalid ext num_props=%d max=%d\n", ext->num_props,
IPA_NUM_PROPS_MAX);
return -EINVAL;
}
len = sizeof(struct ipa_intf);
intf = kzalloc(len, GFP_KERNEL);
if (intf == NULL) {
IPAERR("fail to alloc 0x%x bytes\n", len);
return -ENOMEM;
}
strlcpy(intf->name, name, IPA_RESOURCE_NAME_MAX);
if (tx) {
intf->num_tx_props = tx->num_props;
len = tx->num_props * sizeof(struct ipa_ioc_tx_intf_prop);
intf->tx = kzalloc(len, GFP_KERNEL);
if (intf->tx == NULL) {
IPAERR("fail to alloc 0x%x bytes\n", len);
kfree(intf);
return -ENOMEM;
}
memcpy(intf->tx, tx->prop, len);
}
if (rx) {
intf->num_rx_props = rx->num_props;
len = rx->num_props * sizeof(struct ipa_ioc_rx_intf_prop);
intf->rx = kzalloc(len, GFP_KERNEL);
if (intf->rx == NULL) {
IPAERR("fail to alloc 0x%x bytes\n", len);
kfree(intf->tx);
kfree(intf);
return -ENOMEM;
}
memcpy(intf->rx, rx->prop, len);
}
if (ext) {
intf->num_ext_props = ext->num_props;
len = ext->num_props * sizeof(struct ipa_ioc_ext_intf_prop);
intf->ext = kzalloc(len, GFP_KERNEL);
if (intf->ext == NULL) {
IPAERR("fail to alloc 0x%x bytes\n", len);
kfree(intf->rx);
kfree(intf->tx);
kfree(intf);
return -ENOMEM;
}
memcpy(intf->ext, ext->prop, len);
}
if (ext && ext->excp_pipe_valid)
intf->excp_pipe = ext->excp_pipe;
else
intf->excp_pipe = IPA_CLIENT_APPS_LAN_CONS;
mutex_lock(&ipa_ctx->lock);
list_add_tail(&intf->link, &ipa_ctx->intf_list);
mutex_unlock(&ipa_ctx->lock);
return 0;
}
/**
* ipa2_deregister_intf() - de-register previously registered logical interface
* @name: [in] interface name
*
* De-register a previously registered interface
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_deregister_intf(const char *name)
{
struct ipa_intf *entry;
struct ipa_intf *next;
int result = -EINVAL;
if (unlikely(!ipa_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
if (name == NULL) {
IPAERR("invalid param name=%p\n", name);
return result;
}
mutex_lock(&ipa_ctx->lock);
list_for_each_entry_safe(entry, next, &ipa_ctx->intf_list, link) {
if (!strcmp(entry->name, name)) {
list_del(&entry->link);
kfree(entry->ext);
kfree(entry->rx);
kfree(entry->tx);
kfree(entry);
result = 0;
break;
}
}
mutex_unlock(&ipa_ctx->lock);
return result;
}
/**
* ipa_query_intf() - query logical interface properties
* @lookup: [inout] interface name and number of properties
*
* Obtain the handle and number of tx and rx properties for the named
* interface, used as part of querying the tx and rx properties for
* configuration of various rules from user-space
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa_query_intf(struct ipa_ioc_query_intf *lookup)
{
struct ipa_intf *entry;
int result = -EINVAL;
if (lookup == NULL) {
IPAERR("invalid param lookup=%p\n", lookup);
return result;
}
mutex_lock(&ipa_ctx->lock);
list_for_each_entry(entry, &ipa_ctx->intf_list, link) {
if (!strcmp(entry->name, lookup->name)) {
lookup->num_tx_props = entry->num_tx_props;
lookup->num_rx_props = entry->num_rx_props;
lookup->num_ext_props = entry->num_ext_props;
lookup->excp_pipe = entry->excp_pipe;
result = 0;
break;
}
}
mutex_unlock(&ipa_ctx->lock);
return result;
}
/**
* ipa_query_intf_tx_props() - query TX props of an interface
* @tx: [inout] interface tx attributes
*
* Obtain the tx properties for the specified interface
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa_query_intf_tx_props(struct ipa_ioc_query_intf_tx_props *tx)
{
struct ipa_intf *entry;
int result = -EINVAL;
if (tx == NULL) {
IPAERR("invalid param tx=%p\n", tx);
return result;
}
mutex_lock(&ipa_ctx->lock);
list_for_each_entry(entry, &ipa_ctx->intf_list, link) {
if (!strcmp(entry->name, tx->name)) {
memcpy(tx->tx, entry->tx, entry->num_tx_props *
sizeof(struct ipa_ioc_tx_intf_prop));
result = 0;
break;
}
}
mutex_unlock(&ipa_ctx->lock);
return result;
}
/**
* ipa_query_intf_rx_props() - query RX props of an interface
* @rx: [inout] interface rx attributes
*
* Obtain the rx properties for the specified interface
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa_query_intf_rx_props(struct ipa_ioc_query_intf_rx_props *rx)
{
struct ipa_intf *entry;
int result = -EINVAL;
if (rx == NULL) {
IPAERR("invalid param rx=%p\n", rx);
return result;
}
mutex_lock(&ipa_ctx->lock);
list_for_each_entry(entry, &ipa_ctx->intf_list, link) {
if (!strcmp(entry->name, rx->name)) {
memcpy(rx->rx, entry->rx, entry->num_rx_props *
sizeof(struct ipa_ioc_rx_intf_prop));
result = 0;
break;
}
}
mutex_unlock(&ipa_ctx->lock);
return result;
}
/**
* ipa_query_intf_ext_props() - query EXT props of an interface
* @ext: [inout] interface ext attributes
*
* Obtain the ext properties for the specified interface
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa_query_intf_ext_props(struct ipa_ioc_query_intf_ext_props *ext)
{
struct ipa_intf *entry;
int result = -EINVAL;
if (ext == NULL) {
IPAERR("invalid param ext=%p\n", ext);
return result;
}
mutex_lock(&ipa_ctx->lock);
list_for_each_entry(entry, &ipa_ctx->intf_list, link) {
if (!strcmp(entry->name, ext->name)) {
memcpy(ext->ext, entry->ext, entry->num_ext_props *
sizeof(struct ipa_ioc_ext_intf_prop));
result = 0;
break;
}
}
mutex_unlock(&ipa_ctx->lock);
return result;
}
/**
* ipa2_send_msg() - Send "message" from kernel client to IPA driver
* @meta: [in] message meta-data
* @buff: [in] the payload for message
* @callback: [in] free callback
*
* Client supplies the message meta-data and payload which IPA driver buffers
* till read by user-space. After read from user space IPA driver invokes the
* callback supplied to free the message payload. Client must not touch/free
* the message payload after calling this API.
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_send_msg(struct ipa_msg_meta *meta, void *buff,
ipa_msg_free_fn callback)
{
struct ipa_push_msg *msg;
if (unlikely(!ipa_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
if (meta == NULL || (buff == NULL && callback != NULL) ||
(buff != NULL && callback == NULL)) {
IPAERR("invalid param meta=%p buff=%p, callback=%p\n",
meta, buff, callback);
return -EINVAL;
}
if (meta->msg_type >= IPA_EVENT_MAX_NUM) {
IPAERR("unsupported message type %d\n", meta->msg_type);
return -EINVAL;
}
msg = kzalloc(sizeof(struct ipa_push_msg), GFP_KERNEL);
if (msg == NULL) {
IPAERR("fail to alloc ipa_msg container\n");
return -ENOMEM;
}
msg->meta = *meta;
msg->buff = buff;
msg->callback = callback;
mutex_lock(&ipa_ctx->msg_lock);
list_add_tail(&msg->link, &ipa_ctx->msg_list);
mutex_unlock(&ipa_ctx->msg_lock);
IPA_STATS_INC_CNT(ipa_ctx->stats.msg_w[meta->msg_type]);
wake_up(&ipa_ctx->msg_waitq);
return 0;
}
/**
* ipa2_register_pull_msg() - register pull message type
* @meta: [in] message meta-data
* @callback: [in] pull callback
*
* Register message callback by kernel client with IPA driver for IPA driver to
* pull message on-demand.
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_register_pull_msg(struct ipa_msg_meta *meta, ipa_msg_pull_fn callback)
{
struct ipa_pull_msg *msg;
if (meta == NULL || callback == NULL) {
IPAERR("invalid param meta=%p callback=%p\n", meta, callback);
return -EINVAL;
}
msg = kzalloc(sizeof(struct ipa_pull_msg), GFP_KERNEL);
if (msg == NULL) {
IPAERR("fail to alloc ipa_msg container\n");
return -ENOMEM;
}
msg->meta = *meta;
msg->callback = callback;
mutex_lock(&ipa_ctx->msg_lock);
list_add_tail(&msg->link, &ipa_ctx->pull_msg_list);
mutex_unlock(&ipa_ctx->msg_lock);
return 0;
}
/**
* ipa2_deregister_pull_msg() - De-register pull message type
* @meta: [in] message meta-data
*
* De-register "message" by kernel client from IPA driver
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa2_deregister_pull_msg(struct ipa_msg_meta *meta)
{
struct ipa_pull_msg *entry;
struct ipa_pull_msg *next;
int result = -EINVAL;
if (meta == NULL) {
IPAERR("invalid param meta=%p\n", meta);
return result;
}
mutex_lock(&ipa_ctx->msg_lock);
list_for_each_entry_safe(entry, next, &ipa_ctx->pull_msg_list, link) {
if (entry->meta.msg_len == meta->msg_len &&
entry->meta.msg_type == meta->msg_type) {
list_del(&entry->link);
kfree(entry);
result = 0;
break;
}
}
mutex_unlock(&ipa_ctx->msg_lock);
return result;
}
/**
* ipa_read() - read message from IPA device
* @filp: [in] file pointer
* @buf: [out] buffer to read into
* @count: [in] size of above buffer
* @f_pos: [inout] file position
*
* User-space should continually read from /dev/ipa; the read will block when
* there are no messages to read. Upon return, user-space should read the ipa_msg_meta
* from the start of the buffer to know what type of message was read and its
* length in the remainder of the buffer. Buffer supplied must be big enough to
* hold the message meta-data and the largest defined message type
*
* Returns: how many bytes copied to buffer
*
* Note: Should not be called from atomic context
*/
ssize_t ipa_read(struct file *filp, char __user *buf, size_t count,
loff_t *f_pos)
{
char __user *start;
struct ipa_push_msg *msg = NULL;
int ret;
DEFINE_WAIT(wait);
int locked;
start = buf;
while (1) {
prepare_to_wait(&ipa_ctx->msg_waitq, &wait, TASK_INTERRUPTIBLE);
mutex_lock(&ipa_ctx->msg_lock);
locked = 1;
if (!list_empty(&ipa_ctx->msg_list)) {
msg = list_first_entry(&ipa_ctx->msg_list,
struct ipa_push_msg, link);
list_del(&msg->link);
}
IPADBG("msg=%p\n", msg);
if (msg) {
locked = 0;
mutex_unlock(&ipa_ctx->msg_lock);
if (copy_to_user(buf, &msg->meta,
sizeof(struct ipa_msg_meta))) {
ret = -EFAULT;
break;
}
buf += sizeof(struct ipa_msg_meta);
count -= sizeof(struct ipa_msg_meta);
if (msg->buff) {
if (copy_to_user(buf, msg->buff,
msg->meta.msg_len)) {
ret = -EFAULT;
break;
}
buf += msg->meta.msg_len;
count -= msg->meta.msg_len;
msg->callback(msg->buff, msg->meta.msg_len,
msg->meta.msg_type);
}
IPA_STATS_INC_CNT(
ipa_ctx->stats.msg_r[msg->meta.msg_type]);
kfree(msg);
}
ret = -EAGAIN;
if (filp->f_flags & O_NONBLOCK)
break;
ret = -EINTR;
if (signal_pending(current))
break;
if (start != buf)
break;
locked = 0;
mutex_unlock(&ipa_ctx->msg_lock);
schedule();
}
finish_wait(&ipa_ctx->msg_waitq, &wait);
if (start != buf && ret != -EFAULT)
ret = buf - start;
if (locked)
mutex_unlock(&ipa_ctx->msg_lock);
return ret;
}
/**
* ipa_pull_msg() - pull the specified message from client
* @meta: [in] message meta-data
* @buf: [out] buffer to read into
* @count: [in] size of above buffer
*
* Populate the supplied buffer with the pull message which is fetched
* from client, the message must have previously been registered with
* the IPA driver
*
* Returns: how many bytes copied to buffer
*
* Note: Should not be called from atomic context
*/
int ipa_pull_msg(struct ipa_msg_meta *meta, char *buff, size_t count)
{
struct ipa_pull_msg *entry;
int result = -EINVAL;
if (meta == NULL || buff == NULL || !count) {
IPAERR("invalid param meta=%p buff=%p count=%zu\n",
meta, buff, count);
return result;
}
mutex_lock(&ipa_ctx->msg_lock);
list_for_each_entry(entry, &ipa_ctx->pull_msg_list, link) {
if (entry->meta.msg_len == meta->msg_len &&
entry->meta.msg_type == meta->msg_type) {
result = entry->callback(buff, count, meta->msg_type);
break;
}
}
mutex_unlock(&ipa_ctx->msg_lock);
return result;
}

/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/ipa.h>
#include <linux/ipa_mhi.h>
#include "ipa_i.h"
#include "ipa_qmi_service.h"
#define IPA_MHI_DRV_NAME "ipa_mhi"
#define IPA_MHI_DBG(fmt, args...) \
pr_debug(IPA_MHI_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args)
#define IPA_MHI_ERR(fmt, args...) \
pr_err(IPA_MHI_DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
#define IPA_MHI_FUNC_ENTRY() \
IPA_MHI_DBG("ENTRY\n")
#define IPA_MHI_FUNC_EXIT() \
IPA_MHI_DBG("EXIT\n")
bool ipa2_mhi_sps_channel_empty(enum ipa_client_type client)
{
u32 pipe_idx;
bool pending;
pipe_idx = ipa2_get_ep_mapping(client);
if (sps_pipe_pending_desc(ipa_ctx->bam_handle,
pipe_idx, &pending)) {
IPA_MHI_ERR("sps_pipe_pending_desc failed\n");
WARN_ON(1);
return false;
}
return !pending;
}
int ipa2_disable_sps_pipe(enum ipa_client_type client)
{
int ipa_ep_index;
int res;
ipa_ep_index = ipa2_get_ep_mapping(client);
res = sps_pipe_disable(ipa_ctx->bam_handle, ipa_ep_index);
if (res) {
IPA_MHI_ERR("sps_pipe_disable fail %d\n", res);
return res;
}
return 0;
}
int ipa2_mhi_reset_channel_internal(enum ipa_client_type client)
{
int res;
IPA_MHI_FUNC_ENTRY();
res = ipa_disable_data_path(ipa2_get_ep_mapping(client));
if (res) {
IPA_MHI_ERR("ipa_disable_data_path failed %d\n", res);
return res;
}
IPA_MHI_FUNC_EXIT();
return 0;
}
int ipa2_mhi_start_channel_internal(enum ipa_client_type client)
{
int res;
IPA_MHI_FUNC_ENTRY();
res = ipa_enable_data_path(ipa2_get_ep_mapping(client));
if (res) {
IPA_MHI_ERR("ipa_enable_data_path failed %d\n", res);
return res;
}
IPA_MHI_FUNC_EXIT();
return 0;
}
int ipa2_mhi_init_engine(struct ipa_mhi_init_engine *params)
{
int res;
IPA_MHI_FUNC_ENTRY();
if (!params) {
IPA_MHI_ERR("null args\n");
return -EINVAL;
}
if (ipa2_uc_state_check()) {
IPA_MHI_ERR("IPA uc is not loaded\n");
return -EAGAIN;
}
/* Initialize IPA MHI engine */
res = ipa_uc_mhi_init_engine(params->uC.msi, params->uC.mmio_addr,
params->uC.host_ctrl_addr, params->uC.host_data_addr,
params->uC.first_ch_idx, params->uC.first_er_idx);
if (res) {
IPA_MHI_ERR("failed to start MHI engine %d\n", res);
goto fail_init_engine;
}
/* Update UL/DL sync if valid */
res = ipa2_uc_mhi_send_dl_ul_sync_info(
params->uC.ipa_cached_dl_ul_sync_info);
if (res) {
IPA_MHI_ERR("failed to update ul/dl sync %d\n", res);
goto fail_init_engine;
}
IPA_MHI_FUNC_EXIT();
return 0;
fail_init_engine:
return res;
}
/**
* ipa2_connect_mhi_pipe() - Connect pipe to IPA and start corresponding
* MHI channel
* @in: connect parameters
* @clnt_hdl: [out] client handle for this pipe
*
* This function is called by IPA MHI client driver on MHI channel start.
* This function is called after MHI engine was started.
* This function is doing the following:
* - Send command to uC to start corresponding MHI channel
* - Configure IPA EP control
*
* Return codes: 0 : success
* negative : error
*/
int ipa2_connect_mhi_pipe(struct ipa_mhi_connect_params_internal *in,
u32 *clnt_hdl)
{
struct ipa_ep_context *ep;
int ipa_ep_idx;
int res;
IPA_MHI_FUNC_ENTRY();
if (!in || !clnt_hdl) {
IPA_MHI_ERR("NULL args\n");
return -EINVAL;
}
if (in->sys->client >= IPA_CLIENT_MAX) {
IPA_MHI_ERR("bad parm client:%d\n", in->sys->client);
return -EINVAL;
}
ipa_ep_idx = ipa2_get_ep_mapping(in->sys->client);
if (ipa_ep_idx == -1) {
IPA_MHI_ERR("Invalid client.\n");
return -EINVAL;
}
ep = &ipa_ctx->ep[ipa_ep_idx];
IPA_MHI_DBG("client %d channelHandle %d channelIndex %d\n",
in->sys->client, in->start.uC.index, in->start.uC.id);
if (ep->valid == 1) {
IPA_MHI_ERR("EP already allocated.\n");
goto fail_ep_exists;
}
memset(ep, 0, offsetof(struct ipa_ep_context, sys));
ep->valid = 1;
ep->skip_ep_cfg = in->sys->skip_ep_cfg;
ep->client = in->sys->client;
ep->client_notify = in->sys->notify;
ep->priv = in->sys->priv;
ep->keep_ipa_awake = in->sys->keep_ipa_awake;
/* start channel in uC */
if (in->start.uC.state == IPA_HW_MHI_CHANNEL_STATE_INVALID) {
IPA_MHI_DBG("Initializing channel\n");
res = ipa_uc_mhi_init_channel(ipa_ep_idx, in->start.uC.index,
in->start.uC.id,
(IPA_CLIENT_IS_PROD(ep->client) ? 1 : 2));
if (res) {
IPA_MHI_ERR("init_channel failed %d\n", res);
goto fail_init_channel;
}
} else if (in->start.uC.state == IPA_HW_MHI_CHANNEL_STATE_DISABLE) {
IPA_MHI_DBG("Starting channel\n");
res = ipa_uc_mhi_resume_channel(in->start.uC.index, false);
if (res) {
IPA_MHI_ERR("resume_channel failed %d\n", res);
goto fail_init_channel;
}
} else {
IPA_MHI_ERR("Invalid channel state %d\n", in->start.uC.state);
goto fail_init_channel;
}
res = ipa_enable_data_path(ipa_ep_idx);
if (res) {
IPA_MHI_ERR("enable data path failed res=%d clnt=%d.\n", res,
ipa_ep_idx);
goto fail_enable_dp;
}
if (!ep->skip_ep_cfg) {
if (ipa2_cfg_ep(ipa_ep_idx, &in->sys->ipa_ep_cfg)) {
IPAERR("fail to configure EP.\n");
goto fail_ep_cfg;
}
if (ipa2_cfg_ep_status(ipa_ep_idx, &ep->status)) {
IPAERR("fail to configure status of EP.\n");
goto fail_ep_cfg;
}
IPA_MHI_DBG("ep configuration successful\n");
} else {
IPA_MHI_DBG("skipping ep configuration\n");
}
*clnt_hdl = ipa_ep_idx;
if (!ep->skip_ep_cfg && IPA_CLIENT_IS_PROD(in->sys->client))
ipa_install_dflt_flt_rules(ipa_ep_idx);
ipa_ctx->skip_ep_cfg_shadow[ipa_ep_idx] = ep->skip_ep_cfg;
IPA_MHI_DBG("client %d (ep: %d) connected\n", in->sys->client,
ipa_ep_idx);
IPA_MHI_FUNC_EXIT();
return 0;
fail_ep_cfg:
ipa_disable_data_path(ipa_ep_idx);
fail_enable_dp:
ipa_uc_mhi_reset_channel(in->start.uC.index);
fail_init_channel:
memset(ep, 0, offsetof(struct ipa_ep_context, sys));
fail_ep_exists:
return -EPERM;
}
/**
* ipa2_disconnect_mhi_pipe() - Disconnect pipe from IPA and reset corresponding
* MHI channel
* @in: connect parameters
* @clnt_hdl: [out] client handle for this pipe
*
* This function is called by IPA MHI client driver on MHI channel reset.
* This function is called after MHI channel was started.
* This function is doing the following:
* - Send command to uC to reset corresponding MHI channel
* - Configure IPA EP control
*
* Return codes: 0 : success
* negative : error
*/
int ipa2_disconnect_mhi_pipe(u32 clnt_hdl)
{
IPA_MHI_FUNC_ENTRY();
if (clnt_hdl >= ipa_ctx->ipa_num_pipes) {
IPAERR("invalid handle %d\n", clnt_hdl);
return -EINVAL;
}
if (ipa_ctx->ep[clnt_hdl].valid == 0) {
IPAERR("pipe was not connected %d\n", clnt_hdl);
return -EINVAL;
}
ipa_ctx->ep[clnt_hdl].valid = 0;
ipa_delete_dflt_flt_rules(clnt_hdl);
IPA_MHI_DBG("client (ep: %d) disconnected\n", clnt_hdl);
IPA_MHI_FUNC_EXIT();
return 0;
}
int ipa2_mhi_resume_channels_internal(enum ipa_client_type client,
bool LPTransitionRejected, bool brstmode_enabled,
union __packed gsi_channel_scratch ch_scratch, u8 index)
{
int res;
IPA_MHI_FUNC_ENTRY();
res = ipa_uc_mhi_resume_channel(index, LPTransitionRejected);
if (res) {
IPA_MHI_ERR("failed to resume channel %u error %d\n",
index, res);
return res;
}
IPA_MHI_FUNC_EXIT();
return 0;
}
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("IPA MHI driver");

/* Copyright (c) 2012-2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
#include "ipa_i.h"
#define IPA_NAT_PHYS_MEM_OFFSET 0
#define IPA_NAT_PHYS_MEM_SIZE IPA_RAM_NAT_SIZE
#define IPA_NAT_SYSTEM_MEMORY 0
#define IPA_NAT_SHARED_MEMORY 1
#define IPA_NAT_TEMP_MEM_SIZE 128
static int ipa_nat_vma_fault_remap(
struct vm_area_struct *vma, struct vm_fault *vmf)
{
IPADBG("\n");
vmf->page = NULL;
return VM_FAULT_SIGBUS;
}
/* VMA related file operations functions */
static struct vm_operations_struct ipa_nat_remap_vm_ops = {
.fault = ipa_nat_vma_fault_remap,
};
static int ipa_nat_open(struct inode *inode, struct file *filp)
{
struct ipa_nat_mem *nat_ctx;
IPADBG("\n");
nat_ctx = container_of(inode->i_cdev, struct ipa_nat_mem, cdev);
filp->private_data = nat_ctx;
IPADBG("return\n");
return 0;
}
static int ipa_nat_mmap(struct file *filp, struct vm_area_struct *vma)
{
unsigned long vsize = vma->vm_end - vma->vm_start;
struct ipa_nat_mem *nat_ctx = (struct ipa_nat_mem *)filp->private_data;
unsigned long phys_addr;
int result;
mutex_lock(&nat_ctx->lock);
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
if (nat_ctx->is_sys_mem) {
IPADBG("Mapping system memory\n");
if (nat_ctx->is_mapped) {
IPAERR("mapping already exists, only 1 supported\n");
result = -EINVAL;
goto bail;
}
IPADBG("map sz=0x%zx\n", nat_ctx->size);
result =
dma_mmap_coherent(
ipa_ctx->pdev, vma,
nat_ctx->vaddr, nat_ctx->dma_handle,
nat_ctx->size);
if (result) {
IPAERR("unable to map memory. Err:%d\n", result);
goto bail;
}
ipa_ctx->nat_mem.nat_base_address = nat_ctx->vaddr;
} else {
IPADBG("Mapping shared(local) memory\n");
IPADBG("map sz=0x%lx\n", vsize);
if ((IPA_NAT_PHYS_MEM_SIZE == 0) ||
(vsize > IPA_NAT_PHYS_MEM_SIZE)) {
result = -EINVAL;
goto bail;
}
phys_addr = ipa_ctx->ipa_wrapper_base +
ipa_ctx->ctrl->ipa_reg_base_ofst +
IPA_SRAM_DIRECT_ACCESS_N_OFST(IPA_NAT_PHYS_MEM_OFFSET);
if (remap_pfn_range(
vma, vma->vm_start,
phys_addr >> PAGE_SHIFT, vsize, vma->vm_page_prot)) {
IPAERR("remap failed\n");
result = -EAGAIN;
goto bail;
}
ipa_ctx->nat_mem.nat_base_address = (void *)vma->vm_start;
}
nat_ctx->is_mapped = true;
vma->vm_ops = &ipa_nat_remap_vm_ops;
IPADBG("return\n");
result = 0;
bail:
mutex_unlock(&nat_ctx->lock);
return result;
}
static const struct file_operations ipa_nat_fops = {
.owner = THIS_MODULE,
.open = ipa_nat_open,
.mmap = ipa_nat_mmap
};
/**
* allocate_temp_nat_memory() - Allocates temp nat memory
*
* Called during nat table delete
*/
void allocate_temp_nat_memory(void)
{
struct ipa_nat_mem *nat_ctx = &(ipa_ctx->nat_mem);
int gfp_flags = GFP_KERNEL | __GFP_ZERO;
nat_ctx->tmp_vaddr =
dma_alloc_coherent(ipa_ctx->pdev, IPA_NAT_TEMP_MEM_SIZE,
&nat_ctx->tmp_dma_handle, gfp_flags);
if (nat_ctx->tmp_vaddr == NULL) {
IPAERR("Temp Memory alloc failed\n");
nat_ctx->is_tmp_mem = false;
return;
}
nat_ctx->is_tmp_mem = true;
IPADBG("IPA NAT allocated temp memory successfully\n");
}
/**
* create_nat_device() - Create the NAT device
*
* Called during ipa init to create nat device
*
* Returns: 0 on success, negative on failure
*/
int create_nat_device(void)
{
struct ipa_nat_mem *nat_ctx = &(ipa_ctx->nat_mem);
int result;
IPADBG("\n");
mutex_lock(&nat_ctx->lock);
nat_ctx->class = class_create(THIS_MODULE, NAT_DEV_NAME);
if (IS_ERR(nat_ctx->class)) {
IPAERR("unable to create the class\n");
result = -ENODEV;
goto vaddr_alloc_fail;
}
result = alloc_chrdev_region(&nat_ctx->dev_num,
0,
1,
NAT_DEV_NAME);
if (result) {
IPAERR("alloc_chrdev_region err.\n");
result = -ENODEV;
goto alloc_chrdev_region_fail;
}
nat_ctx->dev =
device_create(nat_ctx->class, NULL, nat_ctx->dev_num, nat_ctx,
"%s", NAT_DEV_NAME);
if (IS_ERR(nat_ctx->dev)) {
IPAERR("device_create err:%ld\n", PTR_ERR(nat_ctx->dev));
result = -ENODEV;
goto device_create_fail;
}
cdev_init(&nat_ctx->cdev, &ipa_nat_fops);
nat_ctx->cdev.owner = THIS_MODULE;
nat_ctx->cdev.ops = &ipa_nat_fops;
result = cdev_add(&nat_ctx->cdev, nat_ctx->dev_num, 1);
if (result) {
IPAERR("cdev_add err=%d\n", -result);
goto cdev_add_fail;
}
IPADBG("ipa nat dev added successful. major:%d minor:%d\n",
MAJOR(nat_ctx->dev_num),
MINOR(nat_ctx->dev_num));
nat_ctx->is_dev = true;
allocate_temp_nat_memory();
IPADBG("IPA NAT device created successfully\n");
result = 0;
goto bail;
cdev_add_fail:
device_destroy(nat_ctx->class, nat_ctx->dev_num);
device_create_fail:
unregister_chrdev_region(nat_ctx->dev_num, 1);
alloc_chrdev_region_fail:
class_destroy(nat_ctx->class);
vaddr_alloc_fail:
if (nat_ctx->vaddr) {
IPADBG("Releasing system memory\n");
dma_free_coherent(
ipa_ctx->pdev, nat_ctx->size,
nat_ctx->vaddr, nat_ctx->dma_handle);
nat_ctx->vaddr = NULL;
nat_ctx->dma_handle = 0;
nat_ctx->size = 0;
}
bail:
mutex_unlock(&nat_ctx->lock);
return result;
}
/**
* ipa2_allocate_nat_device() - Allocates memory for the NAT device
* @mem: [in/out] memory parameters
*
* Called by NAT client driver to allocate memory for the NAT entries. Based on
* the request size either shared or system memory will be used.
*
* Returns: 0 on success, negative on failure
*/
int ipa2_allocate_nat_device(struct ipa_ioc_nat_alloc_mem *mem)
{
struct ipa_nat_mem *nat_ctx = &(ipa_ctx->nat_mem);
int gfp_flags = GFP_KERNEL | __GFP_ZERO;
int result;
IPADBG("passed memory size %zu\n", mem->size);
mutex_lock(&nat_ctx->lock);
if (strcmp(mem->dev_name, NAT_DEV_NAME)) {
IPAERR("Nat device name mismatch\n");
IPAERR("Expect: %s Recv: %s\n", NAT_DEV_NAME, mem->dev_name);
result = -EPERM;
goto bail;
}
if (nat_ctx->is_dev != true) {
IPAERR("Nat device not created successfully during boot up\n");
result = -EPERM;
goto bail;
}
if (nat_ctx->is_dev_init == true) {
IPAERR("Device already init\n");
result = 0;
goto bail;
}
if (mem->size <= 0) {
IPAERR("Invalid memory size %zu\n", mem->size);
result = -EPERM;
goto bail;
}
if (mem->size > IPA_NAT_PHYS_MEM_SIZE) {
IPADBG("Allocating system memory\n");
nat_ctx->is_sys_mem = true;
nat_ctx->vaddr =
dma_alloc_coherent(ipa_ctx->pdev, mem->size,
&nat_ctx->dma_handle, gfp_flags);
if (nat_ctx->vaddr == NULL) {
IPAERR("memory alloc failed\n");
result = -ENOMEM;
goto bail;
}
nat_ctx->size = mem->size;
} else {
IPADBG("using shared(local) memory\n");
nat_ctx->is_sys_mem = false;
}
nat_ctx->is_dev_init = true;
IPADBG("IPA NAT dev init successfully\n");
result = 0;
bail:
mutex_unlock(&nat_ctx->lock);
return result;
}
/* IOCTL function handlers */
/**
* ipa2_nat_init_cmd() - Post IP_V4_NAT_INIT command to IPA HW
* @init: [in] initialization command attributes
*
* Called by NAT client driver to post IP_V4_NAT_INIT command to IPA HW
*
* Returns: 0 on success, negative on failure
*/
int ipa2_nat_init_cmd(struct ipa_ioc_v4_nat_init *init)
{
#define TBL_ENTRY_SIZE 32
#define INDX_TBL_ENTRY_SIZE 4
struct ipa_register_write *reg_write_nop;
struct ipa_desc desc[2];
struct ipa_ip_v4_nat_init *cmd;
u16 size = sizeof(struct ipa_ip_v4_nat_init);
int result;
u32 offset = 0;
size_t tmp;
IPADBG("\n");
if (init->table_entries == 0) {
IPAERR("Table entries is zero\n");
return -EPERM;
}
/* check for integer overflow */
if (init->ipv4_rules_offset >
UINT_MAX - (TBL_ENTRY_SIZE * (init->table_entries + 1))) {
IPAERR("Detected overflow\n");
return -EPERM;
}
/* Check Table Entry offset is not
* beyond allocated size
*/
tmp = init->ipv4_rules_offset +
(TBL_ENTRY_SIZE * (init->table_entries + 1));
if (tmp > ipa_ctx->nat_mem.size) {
IPAERR("Table rules offset not valid\n");
IPAERR("offset:%d entries:%d size:%zu mem_size:%zu\n",
init->ipv4_rules_offset, (init->table_entries + 1),
tmp, ipa_ctx->nat_mem.size);
return -EPERM;
}
/* check for integer overflow */
if (init->expn_rules_offset >
UINT_MAX - (TBL_ENTRY_SIZE * init->expn_table_entries)) {
IPAERR("Detected overflow\n");
return -EPERM;
}
/* Check Expn Table Entry offset is not
* beyond allocated size
*/
tmp = init->expn_rules_offset +
(TBL_ENTRY_SIZE * init->expn_table_entries);
if (tmp > ipa_ctx->nat_mem.size) {
IPAERR("Expn Table rules offset not valid\n");
IPAERR("offset:%d entries:%d size:%zu mem_size:%zu\n",
init->expn_rules_offset, init->expn_table_entries,
tmp, ipa_ctx->nat_mem.size);
return -EPERM;
}
/* check for integer overflow */
if (init->index_offset >
UINT_MAX - (INDX_TBL_ENTRY_SIZE * (init->table_entries + 1))) {
IPAERR("Detected overflow\n");
return -EPERM;
}
/* Check Indx Table Entry offset is not
* beyond allocated size
*/
tmp = init->index_offset +
(INDX_TBL_ENTRY_SIZE * (init->table_entries + 1));
if (tmp > ipa_ctx->nat_mem.size) {
IPAERR("Indx Table rules offset not valid\n");
IPAERR("offset:%d entries:%d size:%zu mem_size:%zu\n",
init->index_offset, (init->table_entries + 1),
tmp, ipa_ctx->nat_mem.size);
return -EPERM;
}
/* check for integer overflow */
if (init->index_expn_offset >
UINT_MAX - (INDX_TBL_ENTRY_SIZE * init->expn_table_entries)) {
IPAERR("Detected overflow\n");
return -EPERM;
}
/* Check Expn Table entry offset is not
* beyond allocated size
*/
tmp = init->index_expn_offset +
(INDX_TBL_ENTRY_SIZE * init->expn_table_entries);
if (tmp > ipa_ctx->nat_mem.size) {
IPAERR("Indx Expn Table rules offset not valid\n");
IPAERR("offset:%d entries:%d size:%zu mem_size:%zu\n",
init->index_expn_offset, init->expn_table_entries,
tmp, ipa_ctx->nat_mem.size);
return -EPERM;
}
memset(&desc, 0, sizeof(desc));
/* NO-OP IC for ensuring that IPA pipeline is empty */
reg_write_nop = kzalloc(sizeof(*reg_write_nop), GFP_KERNEL);
if (!reg_write_nop) {
IPAERR("no mem\n");
result = -ENOMEM;
goto bail;
}
reg_write_nop->skip_pipeline_clear = 0;
reg_write_nop->value_mask = 0x0;
desc[0].opcode = IPA_REGISTER_WRITE;
desc[0].type = IPA_IMM_CMD_DESC;
desc[0].callback = NULL;
desc[0].user1 = NULL;
desc[0].user2 = 0;
desc[0].pyld = (void *)reg_write_nop;
desc[0].len = sizeof(*reg_write_nop);
cmd = kmalloc(size, GFP_KERNEL);
if (!cmd) {
IPAERR("Failed to alloc immediate command object\n");
result = -ENOMEM;
goto free_nop;
}
if (ipa_ctx->nat_mem.vaddr) {
IPADBG("using system memory for nat table\n");
cmd->ipv4_rules_addr_type = IPA_NAT_SYSTEM_MEMORY;
cmd->ipv4_expansion_rules_addr_type = IPA_NAT_SYSTEM_MEMORY;
cmd->index_table_addr_type = IPA_NAT_SYSTEM_MEMORY;
cmd->index_table_expansion_addr_type = IPA_NAT_SYSTEM_MEMORY;
offset = UINT_MAX - ipa_ctx->nat_mem.dma_handle;
if ((init->ipv4_rules_offset > offset) ||
(init->expn_rules_offset > offset) ||
(init->index_offset > offset) ||
(init->index_expn_offset > offset)) {
IPAERR("Failed due to integer overflow\n");
IPAERR("nat.mem.dma_handle: 0x%pa\n",
&ipa_ctx->nat_mem.dma_handle);
IPAERR("ipv4_rules_offset: 0x%x\n",
init->ipv4_rules_offset);
IPAERR("expn_rules_offset: 0x%x\n",
init->expn_rules_offset);
IPAERR("index_offset: 0x%x\n",
init->index_offset);
IPAERR("index_expn_offset: 0x%x\n",
init->index_expn_offset);
result = -EPERM;
goto free_mem;
}
cmd->ipv4_rules_addr =
ipa_ctx->nat_mem.dma_handle + init->ipv4_rules_offset;
IPADBG("ipv4_rules_offset:0x%x\n", init->ipv4_rules_offset);
cmd->ipv4_expansion_rules_addr =
ipa_ctx->nat_mem.dma_handle + init->expn_rules_offset;
IPADBG("expn_rules_offset:0x%x\n", init->expn_rules_offset);
cmd->index_table_addr =
ipa_ctx->nat_mem.dma_handle + init->index_offset;
IPADBG("index_offset:0x%x\n", init->index_offset);
cmd->index_table_expansion_addr =
ipa_ctx->nat_mem.dma_handle + init->index_expn_offset;
IPADBG("index_expn_offset:0x%x\n", init->index_expn_offset);
} else {
IPADBG("using shared(local) memory for nat table\n");
cmd->ipv4_rules_addr_type = IPA_NAT_SHARED_MEMORY;
cmd->ipv4_expansion_rules_addr_type = IPA_NAT_SHARED_MEMORY;
cmd->index_table_addr_type = IPA_NAT_SHARED_MEMORY;
cmd->index_table_expansion_addr_type = IPA_NAT_SHARED_MEMORY;
cmd->ipv4_rules_addr = init->ipv4_rules_offset +
IPA_RAM_NAT_OFST;
cmd->ipv4_expansion_rules_addr = init->expn_rules_offset +
IPA_RAM_NAT_OFST;
cmd->index_table_addr = init->index_offset +
IPA_RAM_NAT_OFST;
cmd->index_table_expansion_addr = init->index_expn_offset +
IPA_RAM_NAT_OFST;
}
cmd->table_index = init->tbl_index;
IPADBG("Table index:0x%x\n", cmd->table_index);
cmd->size_base_tables = init->table_entries;
IPADBG("Base Table size:0x%x\n", cmd->size_base_tables);
cmd->size_expansion_tables = init->expn_table_entries;
IPADBG("Expansion Table size:0x%x\n", cmd->size_expansion_tables);
cmd->public_ip_addr = init->ip_addr;
IPADBG("Public ip address:0x%x\n", cmd->public_ip_addr);
desc[1].opcode = IPA_IP_V4_NAT_INIT;
desc[1].type = IPA_IMM_CMD_DESC;
desc[1].callback = NULL;
desc[1].user1 = NULL;
desc[1].user2 = 0;
desc[1].pyld = (void *)cmd;
desc[1].len = size;
IPADBG("posting v4 init command\n");
if (ipa_send_cmd(2, desc)) {
IPAERR("Fail to send immediate command\n");
result = -EPERM;
goto free_mem;
}
ipa_ctx->nat_mem.public_ip_addr = init->ip_addr;
IPADBG("Table ip address:0x%x\n", ipa_ctx->nat_mem.public_ip_addr);
ipa_ctx->nat_mem.ipv4_rules_addr =
(char *)ipa_ctx->nat_mem.nat_base_address + init->ipv4_rules_offset;
IPADBG("ipv4_rules_addr: 0x%p\n",
ipa_ctx->nat_mem.ipv4_rules_addr);
ipa_ctx->nat_mem.ipv4_expansion_rules_addr =
(char *)ipa_ctx->nat_mem.nat_base_address + init->expn_rules_offset;
IPADBG("ipv4_expansion_rules_addr: 0x%p\n",
ipa_ctx->nat_mem.ipv4_expansion_rules_addr);
ipa_ctx->nat_mem.index_table_addr =
(char *)ipa_ctx->nat_mem.nat_base_address + init->index_offset;
IPADBG("index_table_addr: 0x%p\n",
ipa_ctx->nat_mem.index_table_addr);
ipa_ctx->nat_mem.index_table_expansion_addr =
(char *)ipa_ctx->nat_mem.nat_base_address + init->index_expn_offset;
IPADBG("index_table_expansion_addr: 0x%p\n",
ipa_ctx->nat_mem.index_table_expansion_addr);
IPADBG("size_base_tables: %d\n", init->table_entries);
ipa_ctx->nat_mem.size_base_tables = init->table_entries;
IPADBG("size_expansion_tables: %d\n", init->expn_table_entries);
ipa_ctx->nat_mem.size_expansion_tables = init->expn_table_entries;
IPADBG("return\n");
result = 0;
free_mem:
kfree(cmd);
free_nop:
kfree(reg_write_nop);
bail:
return result;
}
/**
* ipa2_nat_dma_cmd() - Post NAT_DMA command to IPA HW
* @dma: [in] initialization command attributes
*
* Called by NAT client driver to post NAT_DMA command to IPA HW
*
* Returns: 0 on success, negative on failure
*/
int ipa2_nat_dma_cmd(struct ipa_ioc_nat_dma_cmd *dma)
{
#define NUM_OF_DESC 2
struct ipa_register_write *reg_write_nop = NULL;
struct ipa_nat_dma *cmd = NULL;
struct ipa_desc *desc = NULL;
u16 size = 0, cnt = 0;
int ret = 0;
IPADBG("\n");
if (dma->entries <= 0) {
IPAERR("Invalid number of commands %d\n",
dma->entries);
ret = -EPERM;
goto bail;
}
size = sizeof(struct ipa_desc) * NUM_OF_DESC;
desc = kzalloc(size, GFP_KERNEL);
if (desc == NULL) {
IPAERR("Failed to alloc memory\n");
ret = -ENOMEM;
goto bail;
}
size = sizeof(struct ipa_nat_dma);
cmd = kzalloc(size, GFP_KERNEL);
if (cmd == NULL) {
IPAERR("Failed to alloc memory\n");
ret = -ENOMEM;
goto bail;
}
/* NO-OP IC for ensuring that IPA pipeline is empty */
reg_write_nop = kzalloc(sizeof(*reg_write_nop), GFP_KERNEL);
if (!reg_write_nop) {
IPAERR("Failed to alloc memory\n");
ret = -ENOMEM;
goto bail;
}
reg_write_nop->skip_pipeline_clear = 0;
reg_write_nop->value_mask = 0x0;
desc[0].type = IPA_IMM_CMD_DESC;
desc[0].opcode = IPA_REGISTER_WRITE;
desc[0].callback = NULL;
desc[0].user1 = NULL;
desc[0].user2 = 0;
desc[0].len = sizeof(*reg_write_nop);
desc[0].pyld = (void *)reg_write_nop;
for (cnt = 0; cnt < dma->entries; cnt++) {
cmd->table_index = dma->dma[cnt].table_index;
cmd->base_addr = dma->dma[cnt].base_addr;
cmd->offset = dma->dma[cnt].offset;
cmd->data = dma->dma[cnt].data;
desc[1].type = IPA_IMM_CMD_DESC;
desc[1].opcode = IPA_NAT_DMA;
desc[1].callback = NULL;
desc[1].user1 = NULL;
desc[1].user2 = 0;
desc[1].len = sizeof(struct ipa_nat_dma);
desc[1].pyld = (void *)cmd;
ret = ipa_send_cmd(NUM_OF_DESC, desc);
if (ret == -EPERM)
IPAERR("Fail to send immediate command %d\n", cnt);
}
bail:
kfree(cmd);
kfree(desc);
kfree(reg_write_nop);
return ret;
}
/**
* ipa_nat_free_mem_and_device() - free the NAT memory and remove the device
* @nat_ctx: [in] the IPA NAT memory to free
*
* Called by NAT client driver to free the NAT memory and remove the device
*/
void ipa_nat_free_mem_and_device(struct ipa_nat_mem *nat_ctx)
{
IPADBG("\n");
mutex_lock(&nat_ctx->lock);
if (nat_ctx->is_sys_mem) {
IPADBG("freeing the dma memory\n");
dma_free_coherent(
ipa_ctx->pdev, nat_ctx->size,
nat_ctx->vaddr, nat_ctx->dma_handle);
nat_ctx->size = 0;
nat_ctx->vaddr = NULL;
}
nat_ctx->is_mapped = false;
nat_ctx->is_sys_mem = false;
nat_ctx->is_dev_init = false;
mutex_unlock(&nat_ctx->lock);
IPADBG("return\n");
}
/**
* ipa2_nat_del_cmd() - Delete a NAT table
* @del: [in] delete table parameters
*
* Called by NAT client driver to delete the NAT table
*
* Returns: 0 on success, negative on failure
*/
int ipa2_nat_del_cmd(struct ipa_ioc_v4_nat_del *del)
{
struct ipa_register_write *reg_write_nop;
struct ipa_desc desc[2];
struct ipa_ip_v4_nat_init *cmd;
u16 size = sizeof(struct ipa_ip_v4_nat_init);
u8 mem_type = IPA_NAT_SHARED_MEMORY;
u32 base_addr = IPA_NAT_PHYS_MEM_OFFSET;
int result;
IPADBG("\n");
if (ipa_ctx->nat_mem.is_tmp_mem) {
IPAERR("using temp memory during nat del\n");
mem_type = IPA_NAT_SYSTEM_MEMORY;
base_addr = ipa_ctx->nat_mem.tmp_dma_handle;
}
if (del->public_ip_addr == 0) {
IPAERR("Bad parameter\n");
result = -EPERM;
goto bail;
}
memset(&desc, 0, sizeof(desc));
/* NO-OP IC for ensuring that IPA pipeline is empty */
reg_write_nop = kzalloc(sizeof(*reg_write_nop), GFP_KERNEL);
if (!reg_write_nop) {
IPAERR("no mem\n");
result = -ENOMEM;
goto bail;
}
reg_write_nop->skip_pipeline_clear = 0;
reg_write_nop->value_mask = 0x0;
desc[0].opcode = IPA_REGISTER_WRITE;
desc[0].type = IPA_IMM_CMD_DESC;
desc[0].callback = NULL;
desc[0].user1 = NULL;
desc[0].user2 = 0;
desc[0].pyld = (void *)reg_write_nop;
desc[0].len = sizeof(*reg_write_nop);
cmd = kmalloc(size, GFP_KERNEL);
if (cmd == NULL) {
IPAERR("Failed to alloc immediate command object\n");
result = -ENOMEM;
goto free_nop;
}
cmd->table_index = del->table_index;
cmd->ipv4_rules_addr = base_addr;
cmd->ipv4_rules_addr_type = mem_type;
cmd->ipv4_expansion_rules_addr = base_addr;
cmd->ipv4_expansion_rules_addr_type = mem_type;
cmd->index_table_addr = base_addr;
cmd->index_table_addr_type = mem_type;
cmd->index_table_expansion_addr = base_addr;
cmd->index_table_expansion_addr_type = mem_type;
cmd->size_base_tables = 0;
cmd->size_expansion_tables = 0;
cmd->public_ip_addr = 0;
desc[1].opcode = IPA_IP_V4_NAT_INIT;
desc[1].type = IPA_IMM_CMD_DESC;
desc[1].callback = NULL;
desc[1].user1 = NULL;
desc[1].user2 = 0;
desc[1].pyld = (void *)cmd;
desc[1].len = size;
if (ipa_send_cmd(2, desc)) {
IPAERR("Fail to send immediate command\n");
result = -EPERM;
goto free_mem;
}
ipa_ctx->nat_mem.size_base_tables = 0;
ipa_ctx->nat_mem.size_expansion_tables = 0;
ipa_ctx->nat_mem.public_ip_addr = 0;
ipa_ctx->nat_mem.ipv4_rules_addr = 0;
ipa_ctx->nat_mem.ipv4_expansion_rules_addr = 0;
ipa_ctx->nat_mem.index_table_addr = 0;
ipa_ctx->nat_mem.index_table_expansion_addr = 0;
ipa_nat_free_mem_and_device(&ipa_ctx->nat_mem);
IPADBG("return\n");
result = 0;
free_mem:
kfree(cmd);
free_nop:
kfree(reg_write_nop);
bail:
return result;
}



@@ -0,0 +1,280 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef IPA_QMI_SERVICE_H
#define IPA_QMI_SERVICE_H
#include <linux/ipa.h>
#include <linux/ipa_qmi_service_v01.h>
#include <uapi/linux/msm_rmnet.h>
#include <soc/qcom/msm_qmi_interface.h>
#include "ipa_i.h"
#include <linux/rmnet_ipa_fd_ioctl.h>
/**
* Names of the DL WWAN default routing tables for v4 and v6
*/
#define IPA_A7_QMAP_HDR_NAME "ipa_qmap_hdr"
#define IPA_DFLT_WAN_RT_TBL_NAME "ipa_dflt_wan_rt"
#define MAX_NUM_Q6_RULE 35
#define MAX_NUM_QMI_RULE_CACHE 10
#define DEV_NAME "ipa-wan"
#define SUBSYS_MODEM "modem"
#define IPAWANDBG(fmt, args...) \
pr_debug(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
#define IPAWANERR(fmt, args...) \
pr_err(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
extern struct ipa_qmi_context *ipa_qmi_ctx;
extern struct mutex ipa_qmi_lock;
struct ipa_qmi_context {
struct ipa_ioc_ext_intf_prop q6_ul_filter_rule[MAX_NUM_Q6_RULE];
u32 q6_ul_filter_rule_hdl[MAX_NUM_Q6_RULE];
int num_ipa_install_fltr_rule_req_msg;
struct ipa_install_fltr_rule_req_msg_v01
ipa_install_fltr_rule_req_msg_cache[MAX_NUM_QMI_RULE_CACHE];
int num_ipa_fltr_installed_notif_req_msg;
struct ipa_fltr_installed_notif_req_msg_v01
ipa_fltr_installed_notif_req_msg_cache[MAX_NUM_QMI_RULE_CACHE];
bool modem_cfg_emb_pipe_flt;
};
struct rmnet_mux_val {
uint32_t mux_id;
int8_t vchannel_name[IFNAMSIZ];
bool mux_channel_set;
bool ul_flt_reg;
bool mux_hdr_set;
uint32_t hdr_hdl;
};
extern struct elem_info ipa_init_modem_driver_req_msg_data_v01_ei[];
extern struct elem_info ipa_init_modem_driver_resp_msg_data_v01_ei[];
extern struct elem_info ipa_indication_reg_req_msg_data_v01_ei[];
extern struct elem_info ipa_indication_reg_resp_msg_data_v01_ei[];
extern struct elem_info ipa_master_driver_init_complt_ind_msg_data_v01_ei[];
extern struct elem_info ipa_install_fltr_rule_req_msg_data_v01_ei[];
extern struct elem_info ipa_install_fltr_rule_resp_msg_data_v01_ei[];
extern struct elem_info ipa_fltr_installed_notif_req_msg_data_v01_ei[];
extern struct elem_info ipa_fltr_installed_notif_resp_msg_data_v01_ei[];
extern struct elem_info ipa_enable_force_clear_datapath_req_msg_data_v01_ei[];
extern struct elem_info ipa_enable_force_clear_datapath_resp_msg_data_v01_ei[];
extern struct elem_info ipa_disable_force_clear_datapath_req_msg_data_v01_ei[];
extern struct elem_info ipa_disable_force_clear_datapath_resp_msg_data_v01_ei[];
extern struct elem_info ipa_config_req_msg_data_v01_ei[];
extern struct elem_info ipa_config_resp_msg_data_v01_ei[];
extern struct elem_info ipa_get_data_stats_req_msg_data_v01_ei[];
extern struct elem_info ipa_get_data_stats_resp_msg_data_v01_ei[];
extern struct elem_info ipa_get_apn_data_stats_req_msg_data_v01_ei[];
extern struct elem_info ipa_get_apn_data_stats_resp_msg_data_v01_ei[];
extern struct elem_info ipa_set_data_usage_quota_req_msg_data_v01_ei[];
extern struct elem_info ipa_set_data_usage_quota_resp_msg_data_v01_ei[];
extern struct elem_info ipa_data_usage_quota_reached_ind_msg_data_v01_ei[];
extern struct elem_info ipa_stop_data_usage_quota_req_msg_data_v01_ei[];
extern struct elem_info ipa_stop_data_usage_quota_resp_msg_data_v01_ei[];
/**
* struct ipa_rmnet_context - IPA rmnet context
* @ipa_rmnet_ssr: support modem SSR
* @polling_interval: Requested interval for polling tethered statistics
* @metered_mux_id: The mux ID on which quota has been set
*/
struct ipa_rmnet_context {
bool ipa_rmnet_ssr;
u64 polling_interval;
u32 metered_mux_id;
};
extern struct ipa_rmnet_context ipa_rmnet_ctx;
#ifdef CONFIG_RMNET_IPA
int ipa_qmi_service_init(uint32_t wan_platform_type);
void ipa_qmi_service_exit(void);
/* sending filter-install-request to modem */
int qmi_filter_request_send(struct ipa_install_fltr_rule_req_msg_v01 *req);
/* sending filter-installed-notify-request to modem */
int qmi_filter_notify_send(struct ipa_fltr_installed_notif_req_msg_v01 *req);
/* voting for bus BW to ipa_rm */
int vote_for_bus_bw(uint32_t *bw_mbps);
int qmi_enable_force_clear_datapath_send(
struct ipa_enable_force_clear_datapath_req_msg_v01 *req);
int qmi_disable_force_clear_datapath_send(
struct ipa_disable_force_clear_datapath_req_msg_v01 *req);
int copy_ul_filter_rule_to_ipa(struct ipa_install_fltr_rule_req_msg_v01
*rule_req, uint32_t *rule_hdl);
int wwan_update_mux_channel_prop(void);
int wan_ioctl_init(void);
void wan_ioctl_stop_qmi_messages(void);
void wan_ioctl_enable_qmi_messages(void);
void wan_ioctl_deinit(void);
void ipa_qmi_stop_workqueues(void);
int rmnet_ipa_poll_tethering_stats(struct wan_ioctl_poll_tethering_stats *data);
int rmnet_ipa_set_data_quota(struct wan_ioctl_set_data_quota *data);
void ipa_broadcast_quota_reach_ind(uint32_t mux_id);
int rmnet_ipa_set_tether_client_pipe(struct wan_ioctl_set_tether_client_pipe
*data);
int rmnet_ipa_query_tethering_stats(struct wan_ioctl_query_tether_stats *data,
bool reset);
int ipa_qmi_get_data_stats(struct ipa_get_data_stats_req_msg_v01 *req,
struct ipa_get_data_stats_resp_msg_v01 *resp);
int ipa_qmi_get_network_stats(struct ipa_get_apn_data_stats_req_msg_v01 *req,
struct ipa_get_apn_data_stats_resp_msg_v01 *resp);
int ipa_qmi_set_data_quota(struct ipa_set_data_usage_quota_req_msg_v01 *req);
int ipa_qmi_stop_data_qouta(void);
void ipa_q6_handshake_complete(bool ssr_bootup);
void ipa_qmi_init(void);
void ipa_qmi_cleanup(void);
#else /* CONFIG_RMNET_IPA */
static inline int ipa_qmi_service_init(uint32_t wan_platform_type)
{
return -EPERM;
}
static inline void ipa_qmi_service_exit(void) { }
/* sending filter-install-request to modem */
static inline int qmi_filter_request_send(
struct ipa_install_fltr_rule_req_msg_v01 *req)
{
return -EPERM;
}
/* sending filter-installed-notify-request to modem */
static inline int qmi_filter_notify_send(
struct ipa_fltr_installed_notif_req_msg_v01 *req)
{
return -EPERM;
}
static inline int qmi_enable_force_clear_datapath_send(
struct ipa_enable_force_clear_datapath_req_msg_v01 *req)
{
return -EPERM;
}
static inline int qmi_disable_force_clear_datapath_send(
struct ipa_disable_force_clear_datapath_req_msg_v01 *req)
{
return -EPERM;
}
static inline int copy_ul_filter_rule_to_ipa(
struct ipa_install_fltr_rule_req_msg_v01 *rule_req, uint32_t *rule_hdl)
{
return -EPERM;
}
static inline int wwan_update_mux_channel_prop(void)
{
return -EPERM;
}
static inline int wan_ioctl_init(void)
{
return -EPERM;
}
static inline void wan_ioctl_stop_qmi_messages(void) { }
static inline void wan_ioctl_enable_qmi_messages(void) { }
static inline void wan_ioctl_deinit(void) { }
static inline void ipa_qmi_stop_workqueues(void) { }
static inline int vote_for_bus_bw(uint32_t *bw_mbps)
{
return -EPERM;
}
static inline int rmnet_ipa_poll_tethering_stats(
struct wan_ioctl_poll_tethering_stats *data)
{
return -EPERM;
}
static inline int rmnet_ipa_set_data_quota(
struct wan_ioctl_set_data_quota *data)
{
return -EPERM;
}
static inline void ipa_broadcast_quota_reach_ind(uint32_t mux_id) { }
static inline int ipa_qmi_get_data_stats(
struct ipa_get_data_stats_req_msg_v01 *req,
struct ipa_get_data_stats_resp_msg_v01 *resp)
{
return -EPERM;
}
static inline int ipa_qmi_get_network_stats(
struct ipa_get_apn_data_stats_req_msg_v01 *req,
struct ipa_get_apn_data_stats_resp_msg_v01 *resp)
{
return -EPERM;
}
static inline int ipa_qmi_set_data_quota(
struct ipa_set_data_usage_quota_req_msg_v01 *req)
{
return -EPERM;
}
static inline int ipa_qmi_stop_data_qouta(void)
{
return -EPERM;
}
static inline void ipa_q6_handshake_complete(bool ssr_bootup) { }
static inline void ipa_qmi_init(void)
{
}
static inline void ipa_qmi_cleanup(void)
{
}
#endif /* CONFIG_RMNET_IPA */
#endif /* IPA_QMI_SERVICE_H */



@@ -0,0 +1,560 @@
/* Copyright (c) 2012-2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_RAM_MMAP_H_
#define _IPA_RAM_MMAP_H_
/*
* This header defines the memory map of the IPA RAM (not all SRAM is
* available for SW use).
* In case of restricted bytes, the actual starting address will be
* advanced by the number of needed bytes.
*/
#define IPA_RAM_NAT_OFST 0
#define IPA_RAM_NAT_SIZE 0
#define IPA_MEM_v1_RAM_HDR_OFST (IPA_RAM_NAT_OFST + IPA_RAM_NAT_SIZE)
#define IPA_MEM_v1_RAM_HDR_SIZE 1664
#define IPA_MEM_v1_RAM_V4_FLT_OFST (IPA_MEM_v1_RAM_HDR_OFST +\
IPA_MEM_v1_RAM_HDR_SIZE)
#define IPA_MEM_v1_RAM_V4_FLT_SIZE 2176
#define IPA_MEM_v1_RAM_V4_RT_OFST (IPA_MEM_v1_RAM_V4_FLT_OFST +\
IPA_MEM_v1_RAM_V4_FLT_SIZE)
#define IPA_MEM_v1_RAM_V4_RT_SIZE 512
#define IPA_MEM_v1_RAM_V6_FLT_OFST (IPA_MEM_v1_RAM_V4_RT_OFST +\
IPA_MEM_v1_RAM_V4_RT_SIZE)
#define IPA_MEM_v1_RAM_V6_FLT_SIZE 1792
#define IPA_MEM_v1_RAM_V6_RT_OFST (IPA_MEM_v1_RAM_V6_FLT_OFST +\
IPA_MEM_v1_RAM_V6_FLT_SIZE)
#define IPA_MEM_v1_RAM_V6_RT_SIZE 512
#define IPA_MEM_v1_RAM_END_OFST (IPA_MEM_v1_RAM_V6_RT_OFST +\
IPA_MEM_v1_RAM_V6_RT_SIZE)
#define IPA_MEM_RAM_V6_RT_SIZE_DDR 16384
#define IPA_MEM_RAM_V4_RT_SIZE_DDR 16384
#define IPA_MEM_RAM_V6_FLT_SIZE_DDR 16384
#define IPA_MEM_RAM_V4_FLT_SIZE_DDR 16384
#define IPA_MEM_RAM_HDR_PROC_CTX_SIZE_DDR 0
#define IPA_MEM_CANARY_SIZE 4
#define IPA_MEM_CANARY_VAL 0xdeadbeef
#define IPA_MEM_RAM_MODEM_NETWORK_STATS_SIZE 256
/*
* IPA v2.0 and v2.1 SRAM memory layout:
* +-------------+
* | V4 FLT HDR |
* +-------------+
* | CANARY |
* +-------------+
* | CANARY |
* +-------------+
* | V6 FLT HDR |
* +-------------+
* | CANARY |
* +-------------+
* | CANARY |
* +-------------+
* | V4 RT HDR |
* +-------------+
* | CANARY |
* +-------------+
* | V6 RT HDR |
* +-------------+
* | CANARY |
* +-------------+
* | MODEM HDR |
* +-------------+
* | APPS HDR |
* +-------------+
* | CANARY |
* +-------------+
* | MODEM MEM |
* +-------------+
* | CANARY |
* +-------------+
* | APPS V4 FLT |
* +-------------+
* | APPS V6 FLT |
* +-------------+
* | CANARY |
* +-------------+
* | UC INFO |
* +-------------+
*/
#define IPA_MEM_v2_RAM_OFST_START 128
#define IPA_MEM_v2_RAM_V4_FLT_OFST IPA_MEM_v2_RAM_OFST_START
#define IPA_MEM_v2_RAM_V4_FLT_SIZE 88
/* V4 filtering header table is 8B aligned */
#if (IPA_MEM_v2_RAM_V4_FLT_OFST & 7)
#error V4 filtering header table is not 8B aligned
#endif
#define IPA_MEM_v2_RAM_V6_FLT_OFST (IPA_MEM_v2_RAM_V4_FLT_OFST + \
IPA_MEM_v2_RAM_V4_FLT_SIZE + 2*IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_RAM_V6_FLT_SIZE 88
/* V6 filtering header table is 8B aligned */
#if (IPA_MEM_v2_RAM_V6_FLT_OFST & 7)
#error V6 filtering header table is not 8B aligned
#endif
#define IPA_MEM_v2_RAM_V4_RT_OFST (IPA_MEM_v2_RAM_V6_FLT_OFST + \
IPA_MEM_v2_RAM_V6_FLT_SIZE + 2*IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_RAM_V4_NUM_INDEX 11
#define IPA_MEM_v2_V4_MODEM_RT_INDEX_LO 0
#define IPA_MEM_v2_V4_MODEM_RT_INDEX_HI 3
#define IPA_MEM_v2_V4_APPS_RT_INDEX_LO 4
#define IPA_MEM_v2_V4_APPS_RT_INDEX_HI 10
#define IPA_MEM_v2_RAM_V4_RT_SIZE (IPA_MEM_v2_RAM_V4_NUM_INDEX * 4)
/* V4 routing header table is 8B aligned */
#if (IPA_MEM_v2_RAM_V4_RT_OFST & 7)
#error V4 routing header table is not 8B aligned
#endif
#define IPA_MEM_v2_RAM_V6_RT_OFST (IPA_MEM_v2_RAM_V4_RT_OFST + \
IPA_MEM_v2_RAM_V4_RT_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_RAM_V6_NUM_INDEX 11
#define IPA_MEM_v2_V6_MODEM_RT_INDEX_LO 0
#define IPA_MEM_v2_V6_MODEM_RT_INDEX_HI 3
#define IPA_MEM_v2_V6_APPS_RT_INDEX_LO 4
#define IPA_MEM_v2_V6_APPS_RT_INDEX_HI 10
#define IPA_MEM_v2_RAM_V6_RT_SIZE (IPA_MEM_v2_RAM_V6_NUM_INDEX * 4)
/* V6 routing header table is 8B aligned */
#if (IPA_MEM_v2_RAM_V6_RT_OFST & 7)
#error V6 routing header table is not 8B aligned
#endif
#define IPA_MEM_v2_RAM_MODEM_HDR_OFST (IPA_MEM_v2_RAM_V6_RT_OFST + \
IPA_MEM_v2_RAM_V6_RT_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_RAM_MODEM_HDR_SIZE 320
/* header table is 8B aligned */
#if (IPA_MEM_v2_RAM_MODEM_HDR_OFST & 7)
#error header table is not 8B aligned
#endif
#define IPA_MEM_v2_RAM_APPS_HDR_OFST (IPA_MEM_v2_RAM_MODEM_HDR_OFST + \
IPA_MEM_v2_RAM_MODEM_HDR_SIZE)
#define IPA_MEM_v2_RAM_APPS_HDR_SIZE 72
/* header table is 8B aligned */
#if (IPA_MEM_v2_RAM_APPS_HDR_OFST & 7)
#error header table is not 8B aligned
#endif
#define IPA_MEM_v2_RAM_MODEM_OFST (IPA_MEM_v2_RAM_APPS_HDR_OFST + \
IPA_MEM_v2_RAM_APPS_HDR_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_RAM_MODEM_SIZE 3532
/* modem memory is 4B aligned */
#if (IPA_MEM_v2_RAM_MODEM_OFST & 3)
#error modem memory is not 4B aligned
#endif
#define IPA_MEM_v2_RAM_APPS_V4_FLT_OFST (IPA_MEM_v2_RAM_MODEM_OFST + \
IPA_MEM_v2_RAM_MODEM_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_RAM_APPS_V4_FLT_SIZE 1920
/* filtering rule is 4B aligned */
#if (IPA_MEM_v2_RAM_APPS_V4_FLT_OFST & 3)
#error filtering rule is not 4B aligned
#endif
#define IPA_MEM_v2_RAM_APPS_V6_FLT_OFST (IPA_MEM_v2_RAM_APPS_V4_FLT_OFST + \
IPA_MEM_v2_RAM_APPS_V4_FLT_SIZE)
#define IPA_MEM_v2_RAM_APPS_V6_FLT_SIZE 1372
/* filtering rule is 4B aligned */
#if (IPA_MEM_v2_RAM_APPS_V6_FLT_OFST & 3)
#error filtering rule is not 4B aligned
#endif
#define IPA_MEM_v2_RAM_UC_INFO_OFST (IPA_MEM_v2_RAM_APPS_V6_FLT_OFST + \
IPA_MEM_v2_RAM_APPS_V6_FLT_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_RAM_UC_INFO_SIZE 292
/* uC info 4B aligned */
#if (IPA_MEM_v2_RAM_UC_INFO_OFST & 3)
#error uC info is not 4B aligned
#endif
#define IPA_MEM_v2_RAM_END_OFST (IPA_MEM_v2_RAM_UC_INFO_OFST + \
IPA_MEM_v2_RAM_UC_INFO_SIZE)
#define IPA_MEM_v2_RAM_APPS_V4_RT_OFST IPA_MEM_v2_RAM_END_OFST
#define IPA_MEM_v2_RAM_APPS_V4_RT_SIZE 0
#define IPA_MEM_v2_RAM_APPS_V6_RT_OFST IPA_MEM_v2_RAM_END_OFST
#define IPA_MEM_v2_RAM_APPS_V6_RT_SIZE 0
#define IPA_MEM_v2_RAM_HDR_SIZE_DDR 4096
/*
* IPA v2.5/v2.6 SRAM memory layout:
* +----------------+
* | UC INFO |
* +----------------+
* | CANARY |
* +----------------+
* | CANARY |
* +----------------+
* | V4 FLT HDR |
* +----------------+
* | CANARY |
* +----------------+
* | CANARY |
* +----------------+
* | V6 FLT HDR |
* +----------------+
* | CANARY |
* +----------------+
* | CANARY |
* +----------------+
* | V4 RT HDR |
* +----------------+
* | CANARY |
* +----------------+
* | V6 RT HDR |
* +----------------+
* | CANARY |
* +----------------+
* | MODEM HDR |
* +----------------+
* | APPS HDR |
* +----------------+
* | CANARY |
* +----------------+
* | CANARY |
* +----------------+
* | MODEM PROC CTX |
* +----------------+
* | APPS PROC CTX |
* +----------------+
* | CANARY |
* +----------------+
* | MODEM MEM |
* +----------------+
* | CANARY |
* +----------------+
*/
#define IPA_MEM_v2_5_RAM_UC_MEM_SIZE 128
#define IPA_MEM_v2_5_RAM_UC_INFO_OFST IPA_MEM_v2_5_RAM_UC_MEM_SIZE
#define IPA_MEM_v2_5_RAM_UC_INFO_SIZE 512
/* uC info 4B aligned */
#if (IPA_MEM_v2_5_RAM_UC_INFO_OFST & 3)
#error uC info is not 4B aligned
#endif
#define IPA_MEM_v2_5_RAM_OFST_START (IPA_MEM_v2_5_RAM_UC_INFO_OFST + \
IPA_MEM_v2_5_RAM_UC_INFO_SIZE)
#define IPA_MEM_v2_5_RAM_V4_FLT_OFST (IPA_MEM_v2_5_RAM_OFST_START + \
2 * IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_5_RAM_V4_FLT_SIZE 88
/* V4 filtering header table is 8B aligned */
#if (IPA_MEM_v2_5_RAM_V4_FLT_OFST & 7)
#error V4 filtering header table is not 8B aligned
#endif
#define IPA_MEM_v2_5_RAM_V6_FLT_OFST (IPA_MEM_v2_5_RAM_V4_FLT_OFST + \
IPA_MEM_v2_5_RAM_V4_FLT_SIZE + 2 * IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_5_RAM_V6_FLT_SIZE 88
/* V6 filtering header table is 8B aligned */
#if (IPA_MEM_v2_5_RAM_V6_FLT_OFST & 7)
#error V6 filtering header table is not 8B aligned
#endif
#define IPA_MEM_v2_5_RAM_V4_RT_OFST (IPA_MEM_v2_5_RAM_V6_FLT_OFST + \
IPA_MEM_v2_5_RAM_V6_FLT_SIZE + 2 * IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_5_RAM_V4_NUM_INDEX 15
#define IPA_MEM_v2_5_V4_MODEM_RT_INDEX_LO 0
#define IPA_MEM_v2_5_V4_MODEM_RT_INDEX_HI 6
#define IPA_MEM_v2_5_V4_APPS_RT_INDEX_LO \
(IPA_MEM_v2_5_V4_MODEM_RT_INDEX_HI + 1)
#define IPA_MEM_v2_5_V4_APPS_RT_INDEX_HI \
(IPA_MEM_v2_5_RAM_V4_NUM_INDEX - 1)
#define IPA_MEM_v2_5_RAM_V4_RT_SIZE (IPA_MEM_v2_5_RAM_V4_NUM_INDEX * 4)
/* V4 routing header table is 8B aligned */
#if (IPA_MEM_v2_5_RAM_V4_RT_OFST & 7)
#error V4 routing header table is not 8B aligned
#endif
#define IPA_MEM_v2_5_RAM_V6_RT_OFST (IPA_MEM_v2_5_RAM_V4_RT_OFST + \
IPA_MEM_v2_5_RAM_V4_RT_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_5_RAM_V6_NUM_INDEX 15
#define IPA_MEM_v2_5_V6_MODEM_RT_INDEX_LO 0
#define IPA_MEM_v2_5_V6_MODEM_RT_INDEX_HI 6
#define IPA_MEM_v2_5_V6_APPS_RT_INDEX_LO \
(IPA_MEM_v2_5_V6_MODEM_RT_INDEX_HI + 1)
#define IPA_MEM_v2_5_V6_APPS_RT_INDEX_HI \
(IPA_MEM_v2_5_RAM_V6_NUM_INDEX - 1)
#define IPA_MEM_v2_5_RAM_V6_RT_SIZE (IPA_MEM_v2_5_RAM_V6_NUM_INDEX * 4)
/* V6 routing header table is 8B aligned */
#if (IPA_MEM_v2_5_RAM_V6_RT_OFST & 7)
#error V6 routing header table is not 8B aligned
#endif
#define IPA_MEM_v2_5_RAM_MODEM_HDR_OFST (IPA_MEM_v2_5_RAM_V6_RT_OFST + \
IPA_MEM_v2_5_RAM_V6_RT_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_5_RAM_MODEM_HDR_SIZE 320
/* header table is 8B aligned */
#if (IPA_MEM_v2_5_RAM_MODEM_HDR_OFST & 7)
#error header table is not 8B aligned
#endif
#define IPA_MEM_v2_5_RAM_APPS_HDR_OFST (IPA_MEM_v2_5_RAM_MODEM_HDR_OFST + \
IPA_MEM_v2_5_RAM_MODEM_HDR_SIZE)
#define IPA_MEM_v2_5_RAM_APPS_HDR_SIZE 0
/* header table is 8B aligned */
#if (IPA_MEM_v2_5_RAM_APPS_HDR_OFST & 7)
#error header table is not 8B aligned
#endif
#define IPA_MEM_v2_5_RAM_MODEM_HDR_PROC_CTX_OFST \
(IPA_MEM_v2_5_RAM_APPS_HDR_OFST + IPA_MEM_v2_5_RAM_APPS_HDR_SIZE + \
2 * IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_5_RAM_MODEM_HDR_PROC_CTX_SIZE 512
/* header processing context table is 8B aligned */
#if (IPA_MEM_v2_5_RAM_MODEM_HDR_PROC_CTX_OFST & 7)
#error header processing context table is not 8B aligned
#endif
#define IPA_MEM_v2_5_RAM_APPS_HDR_PROC_CTX_OFST \
(IPA_MEM_v2_5_RAM_MODEM_HDR_PROC_CTX_OFST + \
IPA_MEM_v2_5_RAM_MODEM_HDR_PROC_CTX_SIZE)
#define IPA_MEM_v2_5_RAM_APPS_HDR_PROC_CTX_SIZE 512
/* header processing context table is 8B aligned */
#if (IPA_MEM_v2_5_RAM_APPS_HDR_PROC_CTX_OFST & 7)
#error header processing context table is not 8B aligned
#endif
#define IPA_MEM_v2_5_RAM_MODEM_OFST (IPA_MEM_v2_5_RAM_APPS_HDR_PROC_CTX_OFST + \
IPA_MEM_v2_5_RAM_APPS_HDR_PROC_CTX_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_5_RAM_MODEM_SIZE 5800
/* modem memory is 4B aligned */
#if (IPA_MEM_v2_5_RAM_MODEM_OFST & 3)
#error modem memory is not 4B aligned
#endif
#define IPA_MEM_v2_5_RAM_APPS_V4_FLT_OFST (IPA_MEM_v2_5_RAM_MODEM_OFST + \
IPA_MEM_v2_5_RAM_MODEM_SIZE)
#define IPA_MEM_v2_5_RAM_APPS_V4_FLT_SIZE 0
/* filtering rule is 4B aligned */
#if (IPA_MEM_v2_5_RAM_APPS_V4_FLT_OFST & 3)
#error filtering rule is not 4B aligned
#endif
#define IPA_MEM_v2_5_RAM_APPS_V6_FLT_OFST (IPA_MEM_v2_5_RAM_APPS_V4_FLT_OFST + \
IPA_MEM_v2_5_RAM_APPS_V4_FLT_SIZE)
#define IPA_MEM_v2_5_RAM_APPS_V6_FLT_SIZE 0
/* filtering rule is 4B aligned */
#if (IPA_MEM_v2_5_RAM_APPS_V6_FLT_OFST & 3)
#error filtering rule is not 4B aligned
#endif
#define IPA_MEM_v2_5_RAM_END_OFST (IPA_MEM_v2_5_RAM_APPS_V6_FLT_OFST + \
IPA_MEM_v2_5_RAM_APPS_V6_FLT_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_5_RAM_APPS_V4_RT_OFST IPA_MEM_v2_5_RAM_END_OFST
#define IPA_MEM_v2_5_RAM_APPS_V4_RT_SIZE 0
#define IPA_MEM_v2_5_RAM_APPS_V6_RT_OFST IPA_MEM_v2_5_RAM_END_OFST
#define IPA_MEM_v2_5_RAM_APPS_V6_RT_SIZE 0
#define IPA_MEM_v2_5_RAM_HDR_SIZE_DDR 2048
/*
* IPA v2.6Lite SRAM memory layout:
* +----------------+
* | UC INFO |
* +----------------+
* | CANARY |
* +----------------+
* | CANARY |
* +----------------+
* | V4 FLT HDR |
* +----------------+
* | CANARY |
* +----------------+
* | CANARY |
* +----------------+
* | V6 FLT HDR |
* +----------------+
* | CANARY |
* +----------------+
* | CANARY |
* +----------------+
* | V4 RT HDR |
* +----------------+
* | CANARY |
* +----------------+
* | V6 RT HDR |
* +----------------+
* | CANARY |
* +----------------+
* | MODEM HDR |
* +----------------+
* | CANARY |
* +----------------+
* | CANARY |
* +----------------+
* | COMP / DECOMP |
* +----------------+
* | CANARY |
* +----------------+
* | MODEM MEM |
* +----------------+
* | CANARY |
* +----------------+
*/
#define IPA_MEM_v2_6L_RAM_UC_MEM_SIZE 128
#define IPA_MEM_v2_6L_RAM_UC_INFO_OFST IPA_MEM_v2_6L_RAM_UC_MEM_SIZE
#define IPA_MEM_v2_6L_RAM_UC_INFO_SIZE 512
/* uC info 4B aligned */
#if (IPA_MEM_v2_6L_RAM_UC_INFO_OFST & 3)
#error uC info is not 4B aligned
#endif
#define IPA_MEM_v2_6L_RAM_OFST_START (IPA_MEM_v2_6L_RAM_UC_INFO_OFST + \
IPA_MEM_v2_6L_RAM_UC_INFO_SIZE)
#define IPA_MEM_v2_6L_RAM_V4_FLT_OFST (IPA_MEM_v2_6L_RAM_OFST_START + \
2 * IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_6L_RAM_V4_FLT_SIZE 88
/* V4 filtering header table is 8B aligned */
#if (IPA_MEM_v2_6L_RAM_V4_FLT_OFST & 7)
#error V4 filtering header table is not 8B aligned
#endif
#define IPA_MEM_v2_6L_RAM_V6_FLT_OFST (IPA_MEM_v2_6L_RAM_V4_FLT_OFST + \
IPA_MEM_v2_6L_RAM_V4_FLT_SIZE + 2 * IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_6L_RAM_V6_FLT_SIZE 88
/* V6 filtering header table is 8B aligned */
#if (IPA_MEM_v2_6L_RAM_V6_FLT_OFST & 7)
#error V6 filtering header table is not 8B aligned
#endif
#define IPA_MEM_v2_6L_RAM_V4_RT_OFST (IPA_MEM_v2_6L_RAM_V6_FLT_OFST + \
IPA_MEM_v2_6L_RAM_V6_FLT_SIZE + 2 * IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_6L_RAM_V4_NUM_INDEX 15
#define IPA_MEM_v2_6L_V4_MODEM_RT_INDEX_LO 0
#define IPA_MEM_v2_6L_V4_MODEM_RT_INDEX_HI 6
#define IPA_MEM_v2_6L_V4_APPS_RT_INDEX_LO \
(IPA_MEM_v2_6L_V4_MODEM_RT_INDEX_HI + 1)
#define IPA_MEM_v2_6L_V4_APPS_RT_INDEX_HI \
(IPA_MEM_v2_6L_RAM_V4_NUM_INDEX - 1)
#define IPA_MEM_v2_6L_RAM_V4_RT_SIZE (IPA_MEM_v2_6L_RAM_V4_NUM_INDEX * 4)
/* V4 routing header table is 8B aligned */
#if (IPA_MEM_v2_6L_RAM_V4_RT_OFST & 7)
#error V4 routing header table is not 8B aligned
#endif
#define IPA_MEM_v2_6L_RAM_V6_RT_OFST (IPA_MEM_v2_6L_RAM_V4_RT_OFST + \
IPA_MEM_v2_6L_RAM_V4_RT_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_6L_RAM_V6_NUM_INDEX 15
#define IPA_MEM_v2_6L_V6_MODEM_RT_INDEX_LO 0
#define IPA_MEM_v2_6L_V6_MODEM_RT_INDEX_HI 6
#define IPA_MEM_v2_6L_V6_APPS_RT_INDEX_LO \
(IPA_MEM_v2_6L_V6_MODEM_RT_INDEX_HI + 1)
#define IPA_MEM_v2_6L_V6_APPS_RT_INDEX_HI \
(IPA_MEM_v2_6L_RAM_V6_NUM_INDEX - 1)
#define IPA_MEM_v2_6L_RAM_V6_RT_SIZE (IPA_MEM_v2_6L_RAM_V6_NUM_INDEX * 4)
/* V6 routing header table is 8B aligned */
#if (IPA_MEM_v2_6L_RAM_V6_RT_OFST & 7)
#error V6 routing header table is not 8B aligned
#endif
#define IPA_MEM_v2_6L_RAM_MODEM_HDR_OFST (IPA_MEM_v2_6L_RAM_V6_RT_OFST + \
IPA_MEM_v2_6L_RAM_V6_RT_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_6L_RAM_MODEM_HDR_SIZE 320
/* header table is 8B aligned */
#if (IPA_MEM_v2_6L_RAM_MODEM_HDR_OFST & 7)
#error header table is not 8B aligned
#endif
#define IPA_MEM_v2_6L_RAM_APPS_HDR_OFST (IPA_MEM_v2_6L_RAM_MODEM_HDR_OFST + \
IPA_MEM_v2_6L_RAM_MODEM_HDR_SIZE)
#define IPA_MEM_v2_6L_RAM_APPS_HDR_SIZE 0
/* header table is 8B aligned */
#if (IPA_MEM_v2_6L_RAM_APPS_HDR_OFST & 7)
#error header table is not 8B aligned
#endif
#define IPA_MEM_v2_6L_RAM_MODEM_COMP_DECOMP_OFST \
(IPA_MEM_v2_6L_RAM_APPS_HDR_OFST + IPA_MEM_v2_6L_RAM_APPS_HDR_SIZE + \
2 * IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_6L_RAM_MODEM_COMP_DECOMP_SIZE 512
/* comp/decomp memory region is 8B aligned */
#if (IPA_MEM_v2_6L_RAM_MODEM_COMP_DECOMP_OFST & 7)
#error comp/decomp memory region is not 8B aligned
#endif
#define IPA_MEM_v2_6L_RAM_MODEM_OFST \
(IPA_MEM_v2_6L_RAM_MODEM_COMP_DECOMP_OFST + \
IPA_MEM_v2_6L_RAM_MODEM_COMP_DECOMP_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_6L_RAM_MODEM_SIZE 6376
/* modem memory is 4B aligned */
#if (IPA_MEM_v2_6L_RAM_MODEM_OFST & 3)
#error modem memory is not 4B aligned
#endif
#define IPA_MEM_v2_6L_RAM_APPS_V4_FLT_OFST (IPA_MEM_v2_6L_RAM_MODEM_OFST + \
IPA_MEM_v2_6L_RAM_MODEM_SIZE)
#define IPA_MEM_v2_6L_RAM_APPS_V4_FLT_SIZE 0
/* filtering rule is 4B aligned */
#if (IPA_MEM_v2_6L_RAM_APPS_V4_FLT_OFST & 3)
#error filtering rule is not 4B aligned
#endif
#define IPA_MEM_v2_6L_RAM_APPS_V6_FLT_OFST \
(IPA_MEM_v2_6L_RAM_APPS_V4_FLT_OFST + \
IPA_MEM_v2_6L_RAM_APPS_V4_FLT_SIZE)
#define IPA_MEM_v2_6L_RAM_APPS_V6_FLT_SIZE 0
/* filtering rule is 4B aligned */
#if (IPA_MEM_v2_6L_RAM_APPS_V6_FLT_OFST & 3)
#error filtering rule is not 4B aligned
#endif
#define IPA_MEM_v2_6L_RAM_END_OFST (IPA_MEM_v2_6L_RAM_APPS_V6_FLT_OFST + \
IPA_MEM_v2_6L_RAM_APPS_V6_FLT_SIZE + IPA_MEM_CANARY_SIZE)
#define IPA_MEM_v2_6L_RAM_APPS_V4_RT_OFST IPA_MEM_v2_6L_RAM_END_OFST
#define IPA_MEM_v2_6L_RAM_APPS_V4_RT_SIZE 0
#define IPA_MEM_v2_6L_RAM_APPS_V6_RT_OFST IPA_MEM_v2_6L_RAM_END_OFST
#define IPA_MEM_v2_6L_RAM_APPS_V6_RT_SIZE 0
#define IPA_MEM_v2_6L_RAM_HDR_SIZE_DDR 2048
#endif /* _IPA_RAM_MMAP_H_ */


@@ -0,0 +1,319 @@
/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __IPA_REG_H__
#define __IPA_REG_H__
/*
* IPA's BAM specific registers
* Used for IPA HW 1.0 only
*/
#define IPA_BAM_REG_BASE_OFST 0x00004000
#define IPA_BAM_CNFG_BITS_OFST 0x7c
#define IPA_BAM_REMAP_SIZE (0x1000)
#define IPA_FILTER_FILTER_EN_BMSK 0x1
#define IPA_FILTER_FILTER_EN_SHFT 0x0
#define IPA_AGGREGATION_SPARE_REG_2_OFST 0x00002094
#define IPA_AGGREGATION_QCNCM_SIG0_SHFT 16
#define IPA_AGGREGATION_QCNCM_SIG1_SHFT 8
#define IPA_AGGREGATION_SPARE_REG_1_OFST 0x00002090
#define IPA_AGGREGATION_SINGLE_NDP_MSK 0x1
#define IPA_AGGREGATION_SINGLE_NDP_BMSK 0xfffffffe
#define IPA_AGGREGATION_MODE_MSK 0x1
#define IPA_AGGREGATION_MODE_SHFT 31
#define IPA_AGGREGATION_MODE_BMSK 0x7fffffff
#define IPA_AGGREGATION_QCNCM_SIG_BMSK 0xff000000
#define IPA_AGGREGATION_HW_TIMER_FIX_MBIM_AGGR_SHFT 2
#define IPA_AGGREGATION_HW_TIMER_FIX_MBIM_AGGR_BMSK 0x4
#define IPA_HEAD_OF_LINE_BLOCK_EN_OFST 0x00000044
/*
* End of IPA 1.0 Registers
*/
/*
* IPA HW 2.0 Registers
*/
#define IPA_REG_BASE 0x0
#define IPA_IRQ_STTS_EE_n_ADDR(n) (IPA_REG_BASE + 0x00001008 + 0x1000 * (n))
#define IPA_IRQ_STTS_EE_n_MAXn 3
#define IPA_IRQ_EN_EE_n_ADDR(n) (IPA_REG_BASE + 0x0000100c + 0x1000 * (n))
#define IPA_IRQ_EN_EE_n_MAXn 3
#define IPA_IRQ_CLR_EE_n_ADDR(n) (IPA_REG_BASE + 0x00001010 + 0x1000 * (n))
#define IPA_IRQ_CLR_EE_n_MAXn 3
#define IPA_IRQ_SUSPEND_INFO_EE_n_ADDR(n) \
(IPA_REG_BASE + 0x00001098 + 0x1000 * (n))
#define IPA_IRQ_SUSPEND_INFO_EE_n_MAXn 3
/*
* End of IPA 2.0 Registers
*/
/*
* IPA HW 2.5 Registers
*/
#define IPA_BCR_OFST 0x000005B0
#define IPA_COUNTER_CFG_OFST 0x000005E8
#define IPA_COUNTER_CFG_EOT_COAL_GRAN_BMSK 0xF
#define IPA_COUNTER_CFG_EOT_COAL_GRAN_SHFT 0x0
#define IPA_COUNTER_CFG_AGGR_GRAN_BMSK 0x1F0
#define IPA_COUNTER_CFG_AGGR_GRAN_SHFT 0x4
/*
* End of IPA 2.5 Registers
*/
/*
* IPA HW 2.6/2.6L Registers
*/
#define IPA_ENABLED_PIPES_OFST 0x000005DC
#define IPA_YELLOW_MARKER_SYS_CFG_OFST 0x00000728
/*
* End of IPA 2.6/2.6L Registers
*/
/*
* Common Registers
*/
#define IPA_REG_BASE_OFST_v2_0 0x00020000
#define IPA_REG_BASE_OFST_v2_5 0x00040000
#define IPA_REG_BASE_OFST_v2_6L IPA_REG_BASE_OFST_v2_5
#define IPA_COMP_SW_RESET_OFST 0x0000003c
#define IPA_VERSION_OFST 0x00000034
#define IPA_COMP_HW_VERSION_OFST 0x00000030
#define IPA_SHARED_MEM_SIZE_OFST_v1_1 0x00000050
#define IPA_SHARED_MEM_SIZE_OFST_v2_0 0x00000050
#define IPA_SHARED_MEM_SIZE_SHARED_MEM_BADDR_BMSK_v2_0 0xffff0000
#define IPA_SHARED_MEM_SIZE_SHARED_MEM_BADDR_SHFT_v2_0 0x10
#define IPA_SHARED_MEM_SIZE_SHARED_MEM_SIZE_BMSK_v2_0 0xffff
#define IPA_SHARED_MEM_SIZE_SHARED_MEM_SIZE_SHFT_v2_0 0x0
#define IPA_ENDP_INIT_AGGR_N_OFST_v1_1(n) (0x000001c0 + 0x4 * (n))
#define IPA_ENDP_INIT_AGGR_N_OFST_v2_0(n) (0x00000320 + 0x4 * (n))
#define IPA_ENDP_INIT_ROUTE_N_OFST_v1_1(n) (0x00000220 + 0x4 * (n))
#define IPA_ENDP_INIT_ROUTE_N_OFST_v2_0(n) (0x00000370 + 0x4 * (n))
#define IPA_ENDP_INIT_ROUTE_N_ROUTE_TABLE_INDEX_BMSK 0x1f
#define IPA_ENDP_INIT_ROUTE_N_ROUTE_TABLE_INDEX_SHFT 0x0
#define IPA_ROUTE_OFST_v1_1 0x00000044
#define IPA_ROUTE_ROUTE_DIS_SHFT 0x0
#define IPA_ROUTE_ROUTE_DIS_BMSK 0x1
#define IPA_ROUTE_ROUTE_DEF_PIPE_SHFT 0x1
#define IPA_ROUTE_ROUTE_DEF_PIPE_BMSK 0x3e
#define IPA_ROUTE_ROUTE_DEF_HDR_TABLE_SHFT 0x6
#define IPA_ROUTE_ROUTE_DEF_HDR_OFST_SHFT 0x7
#define IPA_ROUTE_ROUTE_DEF_HDR_OFST_BMSK 0x1ff80
#define IPA_ROUTE_ROUTE_FRAG_DEF_PIPE_BMSK 0x3e0000
#define IPA_ROUTE_ROUTE_FRAG_DEF_PIPE_SHFT 0x11
#define IPA_FILTER_OFST_v1_1 0x00000048
#define IPA_SRAM_DIRECT_ACCESS_N_OFST_v1_1(n) (0x00004000 + 0x4 * (n))
#define IPA_SRAM_DIRECT_ACCESS_N_OFST_v2_0(n) (0x00005000 + 0x4 * (n))
#define IPA_SRAM_DIRECT_ACCESS_N_OFST(n) (0x00004000 + 0x4 * (n))
#define IPA_SRAM_SW_FIRST_v2_5 0x00005000
#define IPA_ROUTE_ROUTE_DEF_HDR_TABLE_BMSK 0x40
#define IPA_ENDP_INIT_NAT_N_NAT_EN_SHFT 0x0
#define IPA_COMP_CFG_OFST 0x00000038
#define IPA_ENDP_INIT_AGGR_n_AGGR_FORCE_CLOSE_BMSK 0x1
#define IPA_ENDP_INIT_AGGR_n_AGGR_FORCE_CLOSE_SHFT 0x16
#define IPA_ENDP_INIT_AGGR_n_AGGR_SW_EOF_ACTIVE_BMSK 0x200000
#define IPA_ENDP_INIT_AGGR_n_AGGR_SW_EOF_ACTIVE_SHFT 0x15
#define IPA_ENDP_INIT_AGGR_n_AGGR_PKT_LIMIT_BMSK 0x1f8000
#define IPA_ENDP_INIT_AGGR_n_AGGR_PKT_LIMIT_SHFT 0xf
#define IPA_ENDP_INIT_AGGR_N_AGGR_TIME_LIMIT_BMSK 0x7c00
#define IPA_ENDP_INIT_AGGR_N_AGGR_TIME_LIMIT_SHFT 0xa
#define IPA_ENDP_INIT_AGGR_N_AGGR_BYTE_LIMIT_BMSK 0x3e0
#define IPA_ENDP_INIT_AGGR_N_AGGR_BYTE_LIMIT_SHFT 0x5
#define IPA_ENDP_INIT_AGGR_N_AGGR_TYPE_BMSK 0x1c
#define IPA_ENDP_INIT_AGGR_N_AGGR_TYPE_SHFT 0x2
#define IPA_ENDP_INIT_AGGR_N_AGGR_EN_BMSK 0x3
#define IPA_ENDP_INIT_AGGR_N_AGGR_EN_SHFT 0x0
#define IPA_ENDP_INIT_MODE_N_OFST_v1_1(n) (0x00000170 + 0x4 * (n))
#define IPA_ENDP_INIT_MODE_N_OFST_v2_0(n) (0x000002c0 + 0x4 * (n))
#define IPA_ENDP_INIT_MODE_N_RMSK 0x7f
#define IPA_ENDP_INIT_MODE_N_MAX 19
#define IPA_ENDP_INIT_MODE_N_DEST_PIPE_INDEX_BMSK_v1_1 0x7c
#define IPA_ENDP_INIT_MODE_N_DEST_PIPE_INDEX_SHFT_v1_1 0x2
#define IPA_ENDP_INIT_MODE_N_DEST_PIPE_INDEX_BMSK_v2_0 0x1f0
#define IPA_ENDP_INIT_MODE_N_DEST_PIPE_INDEX_SHFT_v2_0 0x4
#define IPA_ENDP_INIT_MODE_N_MODE_BMSK 0x7
#define IPA_ENDP_INIT_MODE_N_MODE_SHFT 0x0
#define IPA_ENDP_INIT_HDR_N_OFST_v1_1(n) (0x00000120 + 0x4 * (n))
#define IPA_ENDP_INIT_HDR_N_OFST_v2_0(n) (0x00000170 + 0x4 * (n))
#define IPA_ENDP_INIT_HDR_N_HDR_LEN_BMSK 0x3f
#define IPA_ENDP_INIT_HDR_N_HDR_LEN_SHFT 0x0
#define IPA_ENDP_INIT_HDR_N_HDR_ADDITIONAL_CONST_LEN_BMSK 0x7e000
#define IPA_ENDP_INIT_HDR_N_HDR_ADDITIONAL_CONST_LEN_SHFT 0xd
#define IPA_ENDP_INIT_HDR_N_HDR_OFST_PKT_SIZE_BMSK 0x3f00000
#define IPA_ENDP_INIT_HDR_N_HDR_OFST_PKT_SIZE_SHFT 0x14
#define IPA_ENDP_INIT_HDR_N_HDR_OFST_PKT_SIZE_VALID_BMSK 0x80000
#define IPA_ENDP_INIT_HDR_N_HDR_OFST_PKT_SIZE_VALID_SHFT 0x13
#define IPA_ENDP_INIT_HDR_N_HDR_METADATA_REG_VALID_BMSK_v2 0x10000000
#define IPA_ENDP_INIT_HDR_N_HDR_METADATA_REG_VALID_SHFT_v2 0x1c
#define IPA_ENDP_INIT_HDR_N_HDR_LEN_INC_DEAGG_HDR_BMSK_v2 0x8000000
#define IPA_ENDP_INIT_HDR_N_HDR_LEN_INC_DEAGG_HDR_SHFT_v2 0x1b
#define IPA_ENDP_INIT_HDR_N_HDR_A5_MUX_BMSK 0x4000000
#define IPA_ENDP_INIT_HDR_N_HDR_A5_MUX_SHFT 0x1a
#define IPA_ENDP_INIT_HDR_N_HDR_OFST_METADATA_VALID_BMSK 0x40
#define IPA_ENDP_INIT_HDR_N_HDR_OFST_METADATA_VALID_SHFT 0x6
#define IPA_ENDP_INIT_HDR_N_HDR_OFST_METADATA_SHFT 0x7
#define IPA_ENDP_INIT_HDR_N_HDR_OFST_METADATA_BMSK 0x1f80
#define IPA_ENDP_INIT_NAT_N_OFST_v1_1(n) (0x000000c0 + 0x4 * (n))
#define IPA_ENDP_INIT_NAT_N_OFST_v2_0(n) (0x00000120 + 0x4 * (n))
#define IPA_ENDP_INIT_NAT_N_NAT_EN_BMSK 0x3
#define IPA_ENDP_INIT_NAT_N_NAT_EN_SHFT 0x0
#define IPA_ENDP_INIT_HDR_EXT_n_OFST_v2_0(n) (0x000001c0 + 0x4 * (n))
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_ENDIANNESS_BMSK 0x1
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_ENDIANNESS_SHFT 0x0
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_VALID_BMSK 0x2
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_VALID_SHFT 0x1
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_BMSK 0x4
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_SHFT 0x2
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_PAYLOAD_LEN_INC_PADDING_BMSK 0x8
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_PAYLOAD_LEN_INC_PADDING_SHFT 0x3
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_OFFSET_BMSK 0x3f0
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_OFFSET_SHFT 0x4
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_PAD_TO_ALIGNMENT_BMSK_v2_0 0x1c00
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_PAD_TO_ALIGNMENT_SHFT 0xa
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_PAD_TO_ALIGNMENT_BMSK_v2_5 0x3c00
/*
* IPA HW 1.1 specific Registers
*/
#define IPA_FILTER_FILTER_DIS_BMSK 0x1
#define IPA_FILTER_FILTER_DIS_SHFT 0x0
#define IPA_SINGLE_NDP_MODE_OFST 0x00000064
#define IPA_QCNCM_OFST 0x00000060
#define IPA_ENDP_INIT_CTRL_N_OFST(n) (0x00000070 + 0x4 * (n))
#define IPA_ENDP_INIT_CTRL_N_RMSK 0x1
#define IPA_ENDP_INIT_CTRL_N_MAX 19
#define IPA_ENDP_INIT_CTRL_N_ENDP_SUSPEND_BMSK 0x1
#define IPA_ENDP_INIT_CTRL_N_ENDP_SUSPEND_SHFT 0x0
#define IPA_ENDP_INIT_CTRL_N_ENDP_DELAY_BMSK 0x2
#define IPA_ENDP_INIT_CTRL_N_ENDP_DELAY_SHFT 0x1
#define IPA_ENDP_INIT_HOL_BLOCK_EN_N_OFST_v1_1(n) (0x00000270 + 0x4 * (n))
#define IPA_ENDP_INIT_HOL_BLOCK_EN_N_OFST_v2_0(n) (0x000003c0 + 0x4 * (n))
#define IPA_ENDP_INIT_HOL_BLOCK_EN_N_RMSK 0x1
#define IPA_ENDP_INIT_HOL_BLOCK_EN_N_MAX 19
#define IPA_ENDP_INIT_HOL_BLOCK_EN_N_EN_BMSK 0x1
#define IPA_ENDP_INIT_HOL_BLOCK_EN_N_EN_SHFT 0x0
#define IPA_ENDP_INIT_DEAGGR_n_OFST_v2_0(n) (0x00000470 + 0x04 * (n))
#define IPA_ENDP_INIT_DEAGGR_n_DEAGGR_HDR_LEN_BMSK 0x3F
#define IPA_ENDP_INIT_DEAGGR_n_DEAGGR_HDR_LEN_SHFT 0x0
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_VALID_BMSK 0x40
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_VALID_SHFT 0x6
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_LOCATION_BMSK 0x3F00
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_LOCATION_SHFT 0x8
#define IPA_ENDP_INIT_DEAGGR_n_MAX_PACKET_LEN_BMSK 0xFFFF0000
#define IPA_ENDP_INIT_DEAGGR_n_MAX_PACKET_LEN_SHFT 0x10
#define IPA_ENDP_INIT_HOL_BLOCK_TIMER_N_OFST_v1_1(n) (0x000002c0 + 0x4 * (n))
#define IPA_ENDP_INIT_HOL_BLOCK_TIMER_N_OFST_v2_0(n) (0x00000420 + 0x4 * (n))
#define IPA_ENDP_INIT_HOL_BLOCK_TIMER_N_RMSK 0x1ff
#define IPA_ENDP_INIT_HOL_BLOCK_TIMER_N_MAX 19
#define IPA_ENDP_INIT_HOL_BLOCK_TIMER_N_TIMER_BMSK 0x1ff
#define IPA_ENDP_INIT_HOL_BLOCK_TIMER_N_TIMER_SHFT 0x0
#define IPA_DEBUG_CNT_REG_N_OFST_v1_1(n) (0x00000340 + 0x4 * (n))
#define IPA_DEBUG_CNT_REG_N_OFST_v2_0(n) (0x00000600 + 0x4 * (n))
#define IPA_DEBUG_CNT_REG_N_RMSK 0xffffffff
#define IPA_DEBUG_CNT_REG_N_MAX 15
#define IPA_DEBUG_CNT_REG_N_DBG_CNT_REG_BMSK 0xffffffff
#define IPA_DEBUG_CNT_REG_N_DBG_CNT_REG_SHFT 0x0
#define IPA_DEBUG_CNT_CTRL_N_OFST_v1_1(n) (0x00000380 + 0x4 * (n))
#define IPA_DEBUG_CNT_CTRL_N_OFST_v2_0(n) (0x00000640 + 0x4 * (n))
#define IPA_DEBUG_CNT_CTRL_N_RMSK 0x1ff1f171
#define IPA_DEBUG_CNT_CTRL_N_MAX 15
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_RULE_INDEX_BMSK 0x1ff00000
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_RULE_INDEX_SHFT 0x14
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_SOURCE_PIPE_BMSK 0x1f000
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_SOURCE_PIPE_SHFT 0xc
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_PRODUCT_BMSK 0x100
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_PRODUCT_SHFT 0x8
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_TYPE_BMSK 0x70
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_TYPE_SHFT 0x4
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_EN_BMSK 0x1
#define IPA_DEBUG_CNT_CTRL_N_DBG_CNT_EN_SHFT 0x0
#define IPA_ENDP_STATUS_n_OFST(n) (0x000004c0 + 0x4 * (n))
#define IPA_ENDP_STATUS_n_STATUS_ENDP_BMSK 0x3e
#define IPA_ENDP_STATUS_n_STATUS_ENDP_SHFT 0x1
#define IPA_ENDP_STATUS_n_STATUS_EN_BMSK 0x1
#define IPA_ENDP_STATUS_n_STATUS_EN_SHFT 0x0
#define IPA_ENDP_INIT_CFG_n_OFST(n) (0x000000c0 + 0x4 * (n))
#define IPA_ENDP_INIT_CFG_n_RMSK 0x7f
#define IPA_ENDP_INIT_CFG_n_MAXn 19
#define IPA_ENDP_INIT_CFG_n_CS_METADATA_HDR_OFFSET_BMSK 0x78
#define IPA_ENDP_INIT_CFG_n_CS_METADATA_HDR_OFFSET_SHFT 0x3
#define IPA_ENDP_INIT_CFG_n_CS_OFFLOAD_EN_BMSK 0x6
#define IPA_ENDP_INIT_CFG_n_CS_OFFLOAD_EN_SHFT 0x1
#define IPA_ENDP_INIT_CFG_n_FRAG_OFFLOAD_EN_BMSK 0x1
#define IPA_ENDP_INIT_CFG_n_FRAG_OFFLOAD_EN_SHFT 0x0
#define IPA_ENDP_INIT_HDR_METADATA_MASK_n_OFST(n) (0x00000220 + 0x4 * (n))
#define IPA_ENDP_INIT_HDR_METADATA_MASK_n_RMSK 0xffffffff
#define IPA_ENDP_INIT_HDR_METADATA_MASK_n_MAXn 19
#define IPA_ENDP_INIT_HDR_METADATA_MASK_n_METADATA_MASK_BMSK 0xffffffff
#define IPA_ENDP_INIT_HDR_METADATA_MASK_n_METADATA_MASK_SHFT 0x0
#define IPA_ENDP_INIT_HDR_METADATA_n_OFST(n) (0x00000270 + 0x4 * (n))
#define IPA_ENDP_INIT_HDR_METADATA_n_MUX_ID_BMASK 0xFF0000
#define IPA_ENDP_INIT_HDR_METADATA_n_MUX_ID_SHFT 0x10
#define IPA_IRQ_EE_UC_n_OFFS(n) (0x0000101c + 0x1000 * (n))
#define IPA_IRQ_EE_UC_n_RMSK 0x1
#define IPA_IRQ_EE_UC_n_MAXn 3
#define IPA_IRQ_EE_UC_n_INT_BMSK 0x1
#define IPA_IRQ_EE_UC_n_INT_SHFT 0x0
#define IPA_UC_MAILBOX_m_n_OFFS(m, n) (0x0001a000 + 0x80 * (m) + 0x4 * (n))
#define IPA_UC_MAILBOX_m_n_OFFS_v2_5(m, n) (0x00022000 + 0x80 * (m) + 0x4 * (n))
#define IPA_SYS_PKT_PROC_CNTXT_BASE_OFST (0x000005d8)
#define IPA_LOCAL_PKT_PROC_CNTXT_BASE_OFST (0x000005e0)
#endif
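The `*_BMSK`/`*_SHFT` pairs above follow the usual mask-then-shift convention for register fields. As a sketch (hypothetical helpers, not part of this driver; the driver open-codes these operations), a field is read and written like this, shown against the `IPA_SHARED_MEM_SIZE` v2.0 layout:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helpers illustrating the BMSK/SHFT convention used in
 * ipa_reg.h; names are for illustration only. */
static inline uint32_t ipa_field_get(uint32_t reg, uint32_t bmsk, uint32_t shft)
{
	return (reg & bmsk) >> shft;
}

static inline uint32_t ipa_field_set(uint32_t reg, uint32_t bmsk, uint32_t shft,
				     uint32_t val)
{
	return (reg & ~bmsk) | ((val << shft) & bmsk);
}

/* Mirrors IPA_SHARED_MEM_SIZE_SHARED_MEM_*_v2_0: base address in the upper
 * 16 bits, size in the lower 16 bits. */
#define SHARED_MEM_BADDR_BMSK 0xffff0000u
#define SHARED_MEM_BADDR_SHFT 16
#define SHARED_MEM_SIZE_BMSK  0x0000ffffu
#define SHARED_MEM_SIZE_SHFT  0
```

The same get/set pattern applies to every BMSK/SHFT pair in this header, e.g. the `IPA_ENDP_INIT_AGGR_*` and `IPA_ROUTE_*` fields.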

/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#undef TRACE_SYSTEM
#define TRACE_SYSTEM ipa
#define TRACE_INCLUDE_FILE ipa_trace
#if !defined(_IPA_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
#define _IPA_TRACE_H
#include <linux/tracepoint.h>
TRACE_EVENT(
intr_to_poll,
TP_PROTO(unsigned long client),
TP_ARGS(client),
TP_STRUCT__entry(
__field(unsigned long, client)
),
TP_fast_assign(
__entry->client = client;
),
TP_printk("client=%lu", __entry->client)
);
TRACE_EVENT(
poll_to_intr,
TP_PROTO(unsigned long client),
TP_ARGS(client),
TP_STRUCT__entry(
__field(unsigned long, client)
),
TP_fast_assign(
__entry->client = client;
),
TP_printk("client=%lu", __entry->client)
);
TRACE_EVENT(
idle_sleep_enter,
TP_PROTO(unsigned long client),
TP_ARGS(client),
TP_STRUCT__entry(
__field(unsigned long, client)
),
TP_fast_assign(
__entry->client = client;
),
TP_printk("client=%lu", __entry->client)
);
TRACE_EVENT(
idle_sleep_exit,
TP_PROTO(unsigned long client),
TP_ARGS(client),
TP_STRUCT__entry(
__field(unsigned long, client)
),
TP_fast_assign(
__entry->client = client;
),
TP_printk("client=%lu", __entry->client)
);
TRACE_EVENT(
rmnet_ipa_netifni,
TP_PROTO(unsigned long rx_pkt_cnt),
TP_ARGS(rx_pkt_cnt),
TP_STRUCT__entry(
__field(unsigned long, rx_pkt_cnt)
),
TP_fast_assign(
__entry->rx_pkt_cnt = rx_pkt_cnt;
),
TP_printk("rx_pkt_cnt=%lu", __entry->rx_pkt_cnt)
);
TRACE_EVENT(
rmnet_ipa_netifrx,
TP_PROTO(unsigned long rx_pkt_cnt),
TP_ARGS(rx_pkt_cnt),
TP_STRUCT__entry(
__field(unsigned long, rx_pkt_cnt)
),
TP_fast_assign(
__entry->rx_pkt_cnt = rx_pkt_cnt;
),
TP_printk("rx_pkt_cnt=%lu", __entry->rx_pkt_cnt)
);
TRACE_EVENT(
rmnet_ipa_netif_rcv_skb,
TP_PROTO(unsigned long rx_pkt_cnt),
TP_ARGS(rx_pkt_cnt),
TP_STRUCT__entry(
__field(unsigned long, rx_pkt_cnt)
),
TP_fast_assign(
__entry->rx_pkt_cnt = rx_pkt_cnt;
),
TP_printk("rx_pkt_cnt=%lu", __entry->rx_pkt_cnt)
);
#endif /* _IPA_TRACE_H */
/* This part must be outside protection */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#include <trace/define_trace.h>

/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include "ipa_i.h"
#include <linux/delay.h>
#define IPA_RAM_UC_SMEM_SIZE 128
#define IPA_HW_INTERFACE_VERSION 0x0111
#define IPA_PKT_FLUSH_TO_US 100
#define IPA_UC_POLL_SLEEP_USEC 100
#define IPA_UC_POLL_MAX_RETRY 10000
#define HOLB_WORKQUEUE_NAME "ipa_holb_wq"
static struct workqueue_struct *ipa_holb_wq;
static void ipa_start_monitor_holb(struct work_struct *work);
static DECLARE_WORK(ipa_holb_work, ipa_start_monitor_holb);
/**
* enum ipa_cpu_2_hw_commands - Values that represent the commands from the CPU
* @IPA_CPU_2_HW_CMD_NO_OP : No operation is required.
* @IPA_CPU_2_HW_CMD_UPDATE_FLAGS : Update SW flags which define the behavior
* of HW.
* @IPA_CPU_2_HW_CMD_DEBUG_RUN_TEST : Launch predefined test over HW.
* @IPA_CPU_2_HW_CMD_DEBUG_GET_INFO : Read HW internal debug information.
* @IPA_CPU_2_HW_CMD_ERR_FATAL : CPU instructs HW to perform error fatal
* handling.
* @IPA_CPU_2_HW_CMD_CLK_GATE : CPU instructs HW to go to Clock Gated state.
* @IPA_CPU_2_HW_CMD_CLK_UNGATE : CPU instructs HW to go to Clock Ungated state.
* @IPA_CPU_2_HW_CMD_MEMCPY : CPU instructs HW to do memcopy using QMB.
* @IPA_CPU_2_HW_CMD_RESET_PIPE : Command to reset a pipe - SW WA for a HW bug.
* @IPA_CPU_2_HW_CMD_UPDATE_HOLB_MONITORING : Enable/disable HOLB monitoring
* of a given pipe.
*/
enum ipa_cpu_2_hw_commands {
IPA_CPU_2_HW_CMD_NO_OP =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 0),
IPA_CPU_2_HW_CMD_UPDATE_FLAGS =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 1),
IPA_CPU_2_HW_CMD_DEBUG_RUN_TEST =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 2),
IPA_CPU_2_HW_CMD_DEBUG_GET_INFO =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 3),
IPA_CPU_2_HW_CMD_ERR_FATAL =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 4),
IPA_CPU_2_HW_CMD_CLK_GATE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 5),
IPA_CPU_2_HW_CMD_CLK_UNGATE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 6),
IPA_CPU_2_HW_CMD_MEMCPY =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 7),
IPA_CPU_2_HW_CMD_RESET_PIPE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 8),
IPA_CPU_2_HW_CMD_UPDATE_HOLB_MONITORING =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 9),
};
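FEATURE_ENUM_VAL() and EXTRACT_UC_FEATURE() (defined elsewhere in the driver) pack a feature ID alongside the per-feature opcode so one 32-bit command word identifies both. A minimal userspace sketch of that packing, assuming an 8-bit opcode field with the feature ID in the next byte (the real bit widths may differ):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch only: the real FEATURE_ENUM_VAL/EXTRACT_UC_FEATURE macros live
 * elsewhere in the IPA driver; the 8-bit field split here is an assumption. */
#define FEATURE_ENUM_VAL(feature, opcode) ((uint32_t)(((feature) << 8) | (opcode)))
#define EXTRACT_UC_FEATURE(value)         (((value) >> 8) & 0xff)
```

Under this scheme the handler tables indexed by EXTRACT_UC_FEATURE() (see uc_hdlrs below) recover the owning feature from any command, response, or event opcode.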
/**
* enum ipa_hw_2_cpu_responses - Values that represent common HW responses
* to CPU commands.
* @IPA_HW_2_CPU_RESPONSE_INIT_COMPLETED : HW shall send this response once the
* boot sequence is completed and HW is ready to serve commands from CPU
* @IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED: Response to CPU commands
*/
enum ipa_hw_2_cpu_responses {
IPA_HW_2_CPU_RESPONSE_INIT_COMPLETED =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 1),
IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 2),
};
/**
* enum ipa_hw_2_cpu_events - Values that represent HW event to be sent to CPU.
* @IPA_HW_2_CPU_EVENT_ERROR : Event specify a system error is detected by the
* device
* @IPA_HW_2_CPU_EVENT_LOG_INFO : Event providing logging specific information
*/
enum ipa_hw_2_cpu_events {
IPA_HW_2_CPU_EVENT_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 1),
IPA_HW_2_CPU_EVENT_LOG_INFO =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 2),
};
/**
* enum ipa_hw_errors - Common error types.
* @IPA_HW_ERROR_NONE : No error persists
* @IPA_HW_INVALID_DOORBELL_ERROR : Invalid data read from doorbell
* @IPA_HW_DMA_ERROR : Unexpected DMA error
* @IPA_HW_FATAL_SYSTEM_ERROR : HW has crashed and requires reset.
* @IPA_HW_INVALID_OPCODE : Invalid opcode sent
* @IPA_HW_ZIP_ENGINE_ERROR : ZIP engine error
*/
enum ipa_hw_errors {
IPA_HW_ERROR_NONE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 0),
IPA_HW_INVALID_DOORBELL_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 1),
IPA_HW_DMA_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 2),
IPA_HW_FATAL_SYSTEM_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 3),
IPA_HW_INVALID_OPCODE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 4),
IPA_HW_ZIP_ENGINE_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 5)
};
/**
* struct IpaHwMemCopyData_t - Structure holding the parameters
* for the IPA_CPU_2_HW_CMD_MEMCPY command.
*
* The parameters are passed as immediate params in the shared memory
*/
struct IpaHwMemCopyData_t {
u32 destination_addr;
u32 source_addr;
u32 dest_buffer_size;
u32 source_buffer_size;
};
/**
* union IpaHwResetPipeCmdData_t - Structure holding the parameters
* for IPA_CPU_2_HW_CMD_RESET_PIPE command.
* @pipeNum : Pipe number to be reset
* @direction : 1 - IPA Producer, 0 - IPA Consumer
* @reserved_02_03 : Reserved
*
* The parameters are passed as immediate params in the shared memory
*/
union IpaHwResetPipeCmdData_t {
struct IpaHwResetPipeCmdParams_t {
u8 pipeNum;
u8 direction;
u32 reserved_02_03;
} __packed params;
u32 raw32b;
} __packed;
/**
* union IpaHwmonitorHolbCmdData_t - Structure holding the parameters
* for IPA_CPU_2_HW_CMD_UPDATE_HOLB_MONITORING command.
* @monitorPipe : Indication whether to monitor the pipe. 0 - Do not Monitor
* Pipe, 1 - Monitor Pipe
* @pipeNum : Pipe to be monitored/not monitored
* @reserved_02_03 : Reserved
*
* The parameters are passed as immediate params in the shared memory
*/
union IpaHwmonitorHolbCmdData_t {
struct IpaHwmonitorHolbCmdParams_t {
u8 monitorPipe;
u8 pipeNum;
u32 reserved_02_03:16;
} __packed params;
u32 raw32b;
} __packed;
/**
* union IpaHwCpuCmdCompletedResponseData_t - Structure holding the parameters
* for IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED response.
* @originalCmdOp : The original command opcode
* @status : 0 for success indication, otherwise failure
* @reserved : Reserved
*
* Parameters are sent as 32b immediate parameters.
*/
union IpaHwCpuCmdCompletedResponseData_t {
struct IpaHwCpuCmdCompletedResponseParams_t {
u32 originalCmdOp:8;
u32 status:8;
u32 reserved:16;
} __packed params;
u32 raw32b;
} __packed;
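The raw32b member lets the driver move these parameters through shared memory as a single 32-bit word while the bitfield view decodes them. A userspace sketch of that round trip (bitfield layout is ABI-dependent; the decode below assumes an ABI that allocates bitfields from the LSB on a little-endian machine, as GCC/Clang do on ARM and x86):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors union IpaHwCpuCmdCompletedResponseData_t; layout assumptions are
 * noted above and hold for the kernel's target ABIs. */
union cmd_completed_rsp {
	struct {
		uint32_t originalCmdOp:8;
		uint32_t status:8;
		uint32_t reserved:16;
	} params;
	uint32_t raw32b;
};
```

This is exactly the decode ipa_uc_response_hdlr() performs when it copies responseParams into uc_rsp.raw32b and then reads uc_rsp.params.originalCmdOp and uc_rsp.params.status.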
/**
* union IpaHwErrorEventData_t - HW->CPU Common Events
* @errorType : Entered when a system error is detected by the HW. Type of
* error is specified by IPA_HW_ERRORS
* @reserved : Reserved
*/
union IpaHwErrorEventData_t {
struct IpaHwErrorEventParams_t {
u32 errorType:8;
u32 reserved:24;
} __packed params;
u32 raw32b;
} __packed;
/**
* union IpaHwUpdateFlagsCmdData_t - Structure holding the parameters for
* IPA_CPU_2_HW_CMD_UPDATE_FLAGS command
* @newFlags: SW flags defined the behavior of HW.
* This field is expected to be used as bitmask for enum ipa_hw_flags
*/
union IpaHwUpdateFlagsCmdData_t {
struct IpaHwUpdateFlagsCmdParams_t {
u32 newFlags;
} params;
u32 raw32b;
};
struct ipa_uc_hdlrs uc_hdlrs[IPA_HW_NUM_FEATURES] = { { 0 } };
static inline const char *ipa_hw_error_str(enum ipa_hw_errors err_type)
{
const char *str;
switch (err_type) {
case IPA_HW_ERROR_NONE:
str = "IPA_HW_ERROR_NONE";
break;
case IPA_HW_INVALID_DOORBELL_ERROR:
str = "IPA_HW_INVALID_DOORBELL_ERROR";
break;
case IPA_HW_DMA_ERROR:
str = "IPA_HW_DMA_ERROR";
break;
case IPA_HW_FATAL_SYSTEM_ERROR:
str = "IPA_HW_FATAL_SYSTEM_ERROR";
break;
case IPA_HW_INVALID_OPCODE:
str = "IPA_HW_INVALID_OPCODE";
break;
case IPA_HW_ZIP_ENGINE_ERROR:
str = "IPA_HW_ZIP_ENGINE_ERROR";
break;
default:
str = "INVALID ipa_hw_errors type";
}
return str;
}
static void ipa_log_evt_hdlr(void)
{
int i;
if (!ipa_ctx->uc_ctx.uc_event_top_ofst) {
ipa_ctx->uc_ctx.uc_event_top_ofst =
ipa_ctx->uc_ctx.uc_sram_mmio->eventParams;
if (ipa_ctx->uc_ctx.uc_event_top_ofst +
sizeof(struct IpaHwEventLogInfoData_t) >=
ipa_ctx->ctrl->ipa_reg_base_ofst +
IPA_SRAM_DIRECT_ACCESS_N_OFST_v2_0(0) +
ipa_ctx->smem_sz) {
IPAERR("uc_top 0x%x outside SRAM\n",
ipa_ctx->uc_ctx.uc_event_top_ofst);
goto bad_uc_top_ofst;
}
ipa_ctx->uc_ctx.uc_event_top_mmio = ioremap(
ipa_ctx->ipa_wrapper_base +
ipa_ctx->uc_ctx.uc_event_top_ofst,
sizeof(struct IpaHwEventLogInfoData_t));
if (!ipa_ctx->uc_ctx.uc_event_top_mmio) {
IPAERR("fail to ioremap uc top\n");
goto bad_uc_top_ofst;
}
for (i = 0; i < IPA_HW_NUM_FEATURES; i++) {
if (uc_hdlrs[i].ipa_uc_event_log_info_hdlr)
uc_hdlrs[i].ipa_uc_event_log_info_hdlr
(ipa_ctx->uc_ctx.uc_event_top_mmio);
}
} else {
if (ipa_ctx->uc_ctx.uc_sram_mmio->eventParams !=
ipa_ctx->uc_ctx.uc_event_top_ofst) {
IPAERR("uc top ofst changed new=%u cur=%u\n",
ipa_ctx->uc_ctx.uc_sram_mmio->
eventParams,
ipa_ctx->uc_ctx.uc_event_top_ofst);
}
}
return;
bad_uc_top_ofst:
ipa_ctx->uc_ctx.uc_event_top_ofst = 0;
}
/**
* ipa2_uc_state_check() - Check the status of the uC interface
*
* Return value: 0 if the uC is loaded, interface is initialized
* and there was no recent failure in one of the commands.
* A negative value is returned otherwise.
*/
int ipa2_uc_state_check(void)
{
if (!ipa_ctx->uc_ctx.uc_inited) {
IPAERR("uC interface not initialized\n");
return -EFAULT;
}
if (!ipa_ctx->uc_ctx.uc_loaded) {
IPAERR("uC is not loaded\n");
return -EFAULT;
}
if (ipa_ctx->uc_ctx.uc_failed) {
IPAERR("uC has failed its last command\n");
return -EFAULT;
}
return 0;
}
EXPORT_SYMBOL(ipa2_uc_state_check);
/**
* ipa_uc_loaded_check() - Check the uC has been loaded
*
* Return value: 1 if the uC is loaded, 0 otherwise
*/
int ipa_uc_loaded_check(void)
{
return ipa_ctx->uc_ctx.uc_loaded;
}
EXPORT_SYMBOL(ipa_uc_loaded_check);
static void ipa_uc_event_handler(enum ipa_irq_type interrupt,
void *private_data,
void *interrupt_data)
{
union IpaHwErrorEventData_t evt;
u8 feature;
WARN_ON(private_data != ipa_ctx);
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
IPADBG("uC evt opcode=%u\n",
ipa_ctx->uc_ctx.uc_sram_mmio->eventOp);
feature = EXTRACT_UC_FEATURE(ipa_ctx->uc_ctx.uc_sram_mmio->eventOp);
if (feature >= IPA_HW_FEATURE_MAX) {
IPAERR("Invalid feature %u for event %u\n",
feature, ipa_ctx->uc_ctx.uc_sram_mmio->eventOp);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return;
}
/* Feature specific handling */
if (uc_hdlrs[feature].ipa_uc_event_hdlr)
uc_hdlrs[feature].ipa_uc_event_hdlr
(ipa_ctx->uc_ctx.uc_sram_mmio);
/* General handling */
if (ipa_ctx->uc_ctx.uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_ERROR) {
evt.raw32b = ipa_ctx->uc_ctx.uc_sram_mmio->eventParams;
IPAERR("uC Error, evt errorType = %s\n",
ipa_hw_error_str(evt.params.errorType));
ipa_ctx->uc_ctx.uc_failed = true;
ipa_ctx->uc_ctx.uc_error_type = evt.params.errorType;
if (evt.params.errorType == IPA_HW_ZIP_ENGINE_ERROR) {
IPAERR("IPA has encountered a ZIP engine error\n");
ipa_ctx->uc_ctx.uc_zip_error = true;
}
BUG();
} else if (ipa_ctx->uc_ctx.uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_LOG_INFO) {
IPADBG("uC evt log info ofst=0x%x\n",
ipa_ctx->uc_ctx.uc_sram_mmio->eventParams);
ipa_log_evt_hdlr();
} else {
IPADBG("unsupported uC evt opcode=%u\n",
ipa_ctx->uc_ctx.uc_sram_mmio->eventOp);
}
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
}
static int ipa_uc_panic_notifier(struct notifier_block *this,
unsigned long event, void *ptr)
{
int result = 0;
struct ipa_active_client_logging_info log_info;
IPADBG("this=%p evt=%lu ptr=%p\n", this, event, ptr);
result = ipa2_uc_state_check();
if (result)
goto fail;
IPA_ACTIVE_CLIENTS_PREP_SIMPLE(log_info);
if (ipa2_inc_client_enable_clks_no_block(&log_info))
goto fail;
ipa_ctx->uc_ctx.uc_sram_mmio->cmdOp =
IPA_CPU_2_HW_CMD_ERR_FATAL;
/* ensure write to shared memory is done before triggering uc */
wmb();
ipa_write_reg(ipa_ctx->mmio, IPA_IRQ_EE_UC_n_OFFS(0), 0x1);
/* give uc enough time to save state */
udelay(IPA_PKT_FLUSH_TO_US);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
IPADBG("err_fatal issued\n");
fail:
return NOTIFY_DONE;
}
static struct notifier_block ipa_uc_panic_blk = {
.notifier_call = ipa_uc_panic_notifier,
};
void ipa_register_panic_hdlr(void)
{
atomic_notifier_chain_register(&panic_notifier_list,
&ipa_uc_panic_blk);
}
static void ipa_uc_response_hdlr(enum ipa_irq_type interrupt,
void *private_data,
void *interrupt_data)
{
union IpaHwCpuCmdCompletedResponseData_t uc_rsp;
u8 feature;
int res;
int i;
WARN_ON(private_data != ipa_ctx);
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
IPADBG("uC rsp opcode=%u\n",
ipa_ctx->uc_ctx.uc_sram_mmio->responseOp);
feature = EXTRACT_UC_FEATURE(ipa_ctx->uc_ctx.uc_sram_mmio->responseOp);
if (feature >= IPA_HW_FEATURE_MAX) {
IPAERR("Invalid feature %u for response %u\n",
feature, ipa_ctx->uc_ctx.uc_sram_mmio->responseOp);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return;
}
/* Feature specific handling */
if (uc_hdlrs[feature].ipa_uc_response_hdlr) {
res = uc_hdlrs[feature].ipa_uc_response_hdlr(
ipa_ctx->uc_ctx.uc_sram_mmio,
&ipa_ctx->uc_ctx.uc_status);
if (res == 0) {
IPADBG("feature %d specific response handler\n",
feature);
complete_all(&ipa_ctx->uc_ctx.uc_completion);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return;
}
}
/* General handling */
if (ipa_ctx->uc_ctx.uc_sram_mmio->responseOp ==
IPA_HW_2_CPU_RESPONSE_INIT_COMPLETED) {
ipa_ctx->uc_ctx.uc_loaded = true;
IPAERR("IPA uC loaded\n");
/*
* The proxy vote is held until uC is loaded to ensure that
* IPA_HW_2_CPU_RESPONSE_INIT_COMPLETED is received.
*/
ipa2_proxy_clk_unvote();
for (i = 0; i < IPA_HW_NUM_FEATURES; i++) {
if (uc_hdlrs[i].ipa_uc_loaded_hdlr)
uc_hdlrs[i].ipa_uc_loaded_hdlr();
}
/* Queue the work to enable holb monitoring on IPA-USB Producer
* pipe if valid.
*/
if (ipa_ctx->ipa_hw_type == IPA_HW_v2_6L)
queue_work(ipa_holb_wq, &ipa_holb_work);
} else if (ipa_ctx->uc_ctx.uc_sram_mmio->responseOp ==
IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED) {
uc_rsp.raw32b = ipa_ctx->uc_ctx.uc_sram_mmio->responseParams;
IPADBG("uC cmd response opcode=%u status=%u\n",
uc_rsp.params.originalCmdOp,
uc_rsp.params.status);
if (uc_rsp.params.originalCmdOp ==
ipa_ctx->uc_ctx.pending_cmd) {
ipa_ctx->uc_ctx.uc_status = uc_rsp.params.status;
complete_all(&ipa_ctx->uc_ctx.uc_completion);
} else {
IPAERR("Expected cmd=%u rcvd cmd=%u\n",
ipa_ctx->uc_ctx.pending_cmd,
uc_rsp.params.originalCmdOp);
}
} else {
IPAERR("Unsupported uC rsp opcode = %u\n",
ipa_ctx->uc_ctx.uc_sram_mmio->responseOp);
}
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
}
/**
* ipa_uc_interface_init() - Initialize the interface with the uC
*
* Return value: 0 on success, negative value otherwise
*/
int ipa_uc_interface_init(void)
{
int result;
unsigned long phys_addr;
if (ipa_ctx->uc_ctx.uc_inited) {
IPADBG("uC interface already initialized\n");
return 0;
}
ipa_holb_wq = create_singlethread_workqueue(
HOLB_WORKQUEUE_NAME);
if (!ipa_holb_wq) {
IPAERR("HOLB workqueue creation failed\n");
return -ENOMEM;
}
mutex_init(&ipa_ctx->uc_ctx.uc_lock);
if (ipa_ctx->ipa_hw_type >= IPA_HW_v2_5) {
phys_addr = ipa_ctx->ipa_wrapper_base +
ipa_ctx->ctrl->ipa_reg_base_ofst +
IPA_SRAM_SW_FIRST_v2_5;
} else {
phys_addr = ipa_ctx->ipa_wrapper_base +
ipa_ctx->ctrl->ipa_reg_base_ofst +
IPA_SRAM_DIRECT_ACCESS_N_OFST_v2_0(
ipa_ctx->smem_restricted_bytes / 4);
}
ipa_ctx->uc_ctx.uc_sram_mmio = ioremap(phys_addr,
IPA_RAM_UC_SMEM_SIZE);
if (!ipa_ctx->uc_ctx.uc_sram_mmio) {
IPAERR("Fail to ioremap IPA uC SRAM\n");
result = -ENOMEM;
goto remap_fail;
}
result = ipa2_add_interrupt_handler(IPA_UC_IRQ_0,
ipa_uc_event_handler, true,
ipa_ctx);
if (result) {
IPAERR("Fail to register for UC_IRQ0 rsp interrupt\n");
result = -EFAULT;
goto irq_fail0;
}
result = ipa2_add_interrupt_handler(IPA_UC_IRQ_1,
ipa_uc_response_hdlr, true,
ipa_ctx);
if (result) {
IPAERR("fail to register for UC_IRQ1 rsp interrupt\n");
result = -EFAULT;
goto irq_fail1;
}
ipa_ctx->uc_ctx.uc_inited = true;
IPADBG("IPA uC interface is initialized\n");
return 0;
irq_fail1:
ipa2_remove_interrupt_handler(IPA_UC_IRQ_0);
irq_fail0:
iounmap(ipa_ctx->uc_ctx.uc_sram_mmio);
remap_fail:
return result;
}
EXPORT_SYMBOL(ipa_uc_interface_init);
/**
* ipa_uc_send_cmd() - Send a command to the uC
*
* Note: In case the operation times out (No response from the uC) or
* polling maximal amount of retries has reached, the logic
* considers it as an invalid state of the uC/IPA, and
* issues a kernel panic.
*
* Returns: 0 on success.
* -EINVAL in case of invalid input.
* -EBADF in case uC interface is not initialized /
* or the uC has failed previously.
* -EFAULT in case the received status doesn't match
* the expected.
*/
int ipa_uc_send_cmd(u32 cmd, u32 opcode, u32 expected_status,
bool polling_mode, unsigned long timeout_jiffies)
{
int index;
union IpaHwCpuCmdCompletedResponseData_t uc_rsp;
mutex_lock(&ipa_ctx->uc_ctx.uc_lock);
if (ipa2_uc_state_check()) {
IPADBG("uC send command aborted\n");
mutex_unlock(&ipa_ctx->uc_ctx.uc_lock);
return -EBADF;
}
init_completion(&ipa_ctx->uc_ctx.uc_completion);
ipa_ctx->uc_ctx.uc_sram_mmio->cmdParams = cmd;
ipa_ctx->uc_ctx.uc_sram_mmio->cmdOp = opcode;
ipa_ctx->uc_ctx.pending_cmd = opcode;
ipa_ctx->uc_ctx.uc_sram_mmio->responseOp = 0;
ipa_ctx->uc_ctx.uc_sram_mmio->responseParams = 0;
ipa_ctx->uc_ctx.uc_status = 0;
/* ensure write to shared memory is done before triggering uc */
wmb();
ipa_write_reg(ipa_ctx->mmio, IPA_IRQ_EE_UC_n_OFFS(0), 0x1);
if (polling_mode) {
for (index = 0; index < IPA_UC_POLL_MAX_RETRY; index++) {
if (ipa_ctx->uc_ctx.uc_sram_mmio->responseOp ==
IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED) {
uc_rsp.raw32b = ipa_ctx->uc_ctx.uc_sram_mmio->
responseParams;
if (uc_rsp.params.originalCmdOp ==
ipa_ctx->uc_ctx.pending_cmd) {
ipa_ctx->uc_ctx.pending_cmd = -1;
break;
}
}
usleep_range(IPA_UC_POLL_SLEEP_USEC,
IPA_UC_POLL_SLEEP_USEC);
}
if (index == IPA_UC_POLL_MAX_RETRY) {
IPAERR("uC max polling retries reached\n");
if (ipa_ctx->uc_ctx.uc_failed) {
IPAERR("uC reported on Error, errorType = %s\n",
ipa_hw_error_str(ipa_ctx->
uc_ctx.uc_error_type));
}
mutex_unlock(&ipa_ctx->uc_ctx.uc_lock);
BUG();
return -EFAULT;
}
} else {
if (wait_for_completion_timeout(&ipa_ctx->uc_ctx.uc_completion,
timeout_jiffies) == 0) {
IPAERR("uC timed out\n");
if (ipa_ctx->uc_ctx.uc_failed) {
IPAERR("uC reported on Error, errorType = %s\n",
ipa_hw_error_str(ipa_ctx->
uc_ctx.uc_error_type));
}
mutex_unlock(&ipa_ctx->uc_ctx.uc_lock);
BUG();
return -EFAULT;
}
}
if (ipa_ctx->uc_ctx.uc_status != expected_status) {
IPAERR("Received status %u, expected status %u\n",
ipa_ctx->uc_ctx.uc_status, expected_status);
ipa_ctx->uc_ctx.pending_cmd = -1;
mutex_unlock(&ipa_ctx->uc_ctx.uc_lock);
return -EFAULT;
}
ipa_ctx->uc_ctx.pending_cmd = -1;
mutex_unlock(&ipa_ctx->uc_ctx.uc_lock);
IPADBG("uC cmd %u send succeeded\n", opcode);
return 0;
}
EXPORT_SYMBOL(ipa_uc_send_cmd);
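The polling branch of ipa_uc_send_cmd() above is a bounded retry loop: check the shared-memory response, sleep roughly 100 us, and give up (treating it as fatal) after IPA_UC_POLL_MAX_RETRY attempts. The control flow can be sketched in userspace as follows (hypothetical helper names; the sleep between iterations is elided):

```c
#include <assert.h>
#include <stdbool.h>

#define POLL_MAX_RETRY 10000	/* mirrors IPA_UC_POLL_MAX_RETRY */

/* Returns 0 once poll_fn() reports the response, -1 when retries run out
 * (the driver treats exhaustion as an invalid uC state and panics). */
static int poll_until_done(bool (*poll_fn)(void *), void *ctx)
{
	int i;

	for (i = 0; i < POLL_MAX_RETRY; i++) {
		if (poll_fn(ctx))
			return 0;
		/* driver: usleep_range(IPA_UC_POLL_SLEEP_USEC, ...) */
	}
	return -1;
}

static bool ready_after_three(void *ctx)
{
	int *calls = ctx;

	return ++(*calls) >= 3;
}

static bool never_ready(void *ctx)
{
	(void)ctx;
	return false;
}
```

In the driver, poll_fn corresponds to comparing responseOp against IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED and matching originalCmdOp to the pending command.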
/**
* ipa_uc_register_handlers() - Registers event, response and log event
* handlers for a specific feature. Please note
* that currently only one handler can be
* registered per feature.
*
* Return value: None
*/
void ipa_uc_register_handlers(enum ipa_hw_features feature,
struct ipa_uc_hdlrs *hdlrs)
{
if (0 > feature || IPA_HW_FEATURE_MAX <= feature) {
IPAERR("Feature %u is invalid, not registering hdlrs\n",
feature);
return;
}
mutex_lock(&ipa_ctx->uc_ctx.uc_lock);
uc_hdlrs[feature] = *hdlrs;
mutex_unlock(&ipa_ctx->uc_ctx.uc_lock);
IPADBG("uC handlers registered for feature %u\n", feature);
}
EXPORT_SYMBOL(ipa_uc_register_handlers);
/**
* ipa_uc_reset_pipe() - reset a BAM pipe using the uC interface
* @ipa_client: [in] ipa client handle representing the pipe
*
* The function uses the uC interface in order to issue a BAM
* PIPE reset request. The uC makes sure there's no traffic in
* the TX command queue before issuing the reset.
*
* Returns: 0 on success, negative on failure
*/
int ipa_uc_reset_pipe(enum ipa_client_type ipa_client)
{
union IpaHwResetPipeCmdData_t cmd;
int ep_idx;
int ret;
ep_idx = ipa2_get_ep_mapping(ipa_client);
if (ep_idx == -1) {
IPAERR("Invalid IPA client\n");
return 0;
}
/*
* If the uC interface has not been initialized yet,
* continue with the sequence without resetting the
* pipe.
*/
if (ipa2_uc_state_check()) {
IPADBG("uC interface will not be used to reset %s pipe %d\n",
IPA_CLIENT_IS_PROD(ipa_client) ? "CONS" : "PROD",
ep_idx);
return 0;
}
/*
* IPA consumer = 0, IPA producer = 1.
* IPA driver concept of PROD/CONS is the opposite of the
* IPA HW concept. Therefore, IPA AP CLIENT PRODUCER = IPA CONSUMER,
* and vice-versa.
*/
cmd.params.direction = (u8)(IPA_CLIENT_IS_PROD(ipa_client) ? 0 : 1);
cmd.params.pipeNum = (u8)ep_idx;
IPADBG("uC pipe reset on IPA %s pipe %d\n",
IPA_CLIENT_IS_PROD(ipa_client) ? "CONS" : "PROD", ep_idx);
ret = ipa_uc_send_cmd(cmd.raw32b, IPA_CPU_2_HW_CMD_RESET_PIPE, 0,
false, 10*HZ);
return ret;
}
EXPORT_SYMBOL(ipa_uc_reset_pipe);
/**
* ipa_uc_monitor_holb() - Enable/Disable holb monitoring of a producer pipe.
* @ipa_client: [in] ipa client handle representing the pipe
*
* The function uses the uC interface in order to disable/enable holb
* monitoring.
*
* Returns: 0 on success, negative on failure
*/
int ipa_uc_monitor_holb(enum ipa_client_type ipa_client, bool enable)
{
union IpaHwmonitorHolbCmdData_t cmd;
int ep_idx;
int ret;
/* HOLB monitoring is applicable only to 2.6L. */
if (ipa_ctx->ipa_hw_type != IPA_HW_v2_6L) {
IPADBG("Not applicable on this target\n");
return 0;
}
ep_idx = ipa2_get_ep_mapping(ipa_client);
if (ep_idx == -1) {
IPAERR("Invalid IPA client\n");
return 0;
}
/*
* If the uC interface has not been initialized yet,
* continue with the sequence without resetting the
* pipe.
*/
if (ipa2_uc_state_check()) {
IPADBG("uC interface will not be used to reset %s pipe %d\n",
IPA_CLIENT_IS_PROD(ipa_client) ? "CONS" : "PROD",
ep_idx);
return 0;
}
/*
* IPA consumer = 0, IPA producer = 1.
* IPA driver concept of PROD/CONS is the opposite of the
* IPA HW concept. Therefore, IPA AP CLIENT PRODUCER = IPA CONSUMER,
* and vice-versa.
*/
cmd.params.monitorPipe = (u8)(enable ? 1 : 0);
cmd.params.pipeNum = (u8)ep_idx;
IPADBG("uC holb monitoring on IPA pipe %d, Enable: %d\n",
ep_idx, enable);
ret = ipa_uc_send_cmd(cmd.raw32b,
IPA_CPU_2_HW_CMD_UPDATE_HOLB_MONITORING, 0,
false, 10*HZ);
return ret;
}
EXPORT_SYMBOL(ipa_uc_monitor_holb);
/**
* ipa_start_monitor_holb() - Send HOLB command to monitor IPA-USB
* producer pipe.
*
 * This function is called after the uC is loaded to start monitoring
 * the IPA pipe towards USB in case USB is already connected.
*
* Return codes:
* None
*/
static void ipa_start_monitor_holb(struct work_struct *work)
{
IPADBG("starting holb monitoring on IPA_CLIENT_USB_CONS\n");
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
ipa_uc_monitor_holb(IPA_CLIENT_USB_CONS, true);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
}
/**
* ipa_uc_notify_clk_state() - notify to uC of clock enable / disable
 * @enabled: true if clocks are enabled
 *
 * The function uses the uC interface to notify the uC before IPA clocks
 * are disabled, to make sure the uC is not in the middle of an operation.
 * Also, after clocks are enabled, the uC needs to be notified to start
 * processing.
*
* Returns: 0 on success, negative on failure
*/
int ipa_uc_notify_clk_state(bool enabled)
{
u32 opcode;
/*
* If the uC interface has not been initialized yet,
* don't notify the uC on the enable/disable
*/
if (ipa2_uc_state_check()) {
		IPADBG("uC interface will not notify the uC on clock state\n");
return 0;
}
IPADBG("uC clock %s notification\n", (enabled) ? "UNGATE" : "GATE");
opcode = (enabled) ? IPA_CPU_2_HW_CMD_CLK_UNGATE :
IPA_CPU_2_HW_CMD_CLK_GATE;
return ipa_uc_send_cmd(0, opcode, 0, true, 0);
}
EXPORT_SYMBOL(ipa_uc_notify_clk_state);
/**
* ipa_uc_update_hw_flags() - send uC the HW flags to be used
* @flags: This field is expected to be used as bitmask for enum ipa_hw_flags
*
* Returns: 0 on success, negative on failure
*/
int ipa_uc_update_hw_flags(u32 flags)
{
union IpaHwUpdateFlagsCmdData_t cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.params.newFlags = flags;
return ipa_uc_send_cmd(cmd.raw32b, IPA_CPU_2_HW_CMD_UPDATE_FLAGS, 0,
false, HZ);
}
EXPORT_SYMBOL(ipa_uc_update_hw_flags);
/**
* ipa_uc_memcpy() - Perform a memcpy action using IPA uC
* @dest: physical address to store the copied data.
* @src: physical address of the source data to copy.
* @len: number of bytes to copy.
*
* Returns: 0 on success, negative on failure
*/
int ipa_uc_memcpy(phys_addr_t dest, phys_addr_t src, int len)
{
int res;
struct ipa_mem_buffer mem;
struct IpaHwMemCopyData_t *cmd;
IPADBG("dest 0x%pa src 0x%pa len %d\n", &dest, &src, len);
	mem.size = sizeof(*cmd);
mem.base = dma_alloc_coherent(ipa_ctx->pdev, mem.size, &mem.phys_base,
GFP_KERNEL);
if (!mem.base) {
IPAERR("fail to alloc DMA buff of size %d\n", mem.size);
return -ENOMEM;
}
cmd = (struct IpaHwMemCopyData_t *)mem.base;
memset(cmd, 0, sizeof(*cmd));
cmd->destination_addr = dest;
cmd->dest_buffer_size = len;
cmd->source_addr = src;
cmd->source_buffer_size = len;
res = ipa_uc_send_cmd((u32)mem.phys_base, IPA_CPU_2_HW_CMD_MEMCPY, 0,
true, 10 * HZ);
if (res) {
IPAERR("ipa_uc_send_cmd failed %d\n", res);
goto free_coherent;
}
res = 0;
free_coherent:
dma_free_coherent(ipa_ctx->pdev, mem.size, mem.base, mem.phys_base);
return res;
}

/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/ipa.h>
#include "ipa_i.h"
/* MHI uC interface definitions */
#define IPA_HW_INTERFACE_MHI_VERSION 0x0004
#define IPA_HW_MAX_NUMBER_OF_CHANNELS 2
#define IPA_HW_MAX_NUMBER_OF_EVENTRINGS 2
#define IPA_HW_MAX_CHANNEL_HANDLE (IPA_HW_MAX_NUMBER_OF_CHANNELS-1)
/**
* Values that represent the MHI commands from CPU to IPA HW.
* @IPA_CPU_2_HW_CMD_MHI_INIT: Initialize HW to be ready for MHI processing.
* Once operation was completed HW shall respond with
* IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED.
* @IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL: Initialize specific channel to be ready
 * to serve MHI transfers. Once initialization was completed HW shall
 * respond with IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE and state
 * IPA_HW_MHI_CHANNEL_STATE_ENABLE.
* @IPA_CPU_2_HW_CMD_MHI_UPDATE_MSI: Update MHI MSI interrupts data.
* Once operation was completed HW shall respond with
* IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED.
* @IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE: Change specific channel
* processing state following host request. Once operation was completed
* HW shall respond with IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE.
 * @IPA_CPU_2_HW_CMD_MHI_DL_UL_SYNC_INFO: Info related to DL UL synchronization.
* @IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE: Cmd to stop event ring processing.
*/
enum ipa_cpu_2_hw_mhi_commands {
IPA_CPU_2_HW_CMD_MHI_INIT
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 0),
IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 1),
IPA_CPU_2_HW_CMD_MHI_UPDATE_MSI
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 2),
IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 3),
IPA_CPU_2_HW_CMD_MHI_DL_UL_SYNC_INFO
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 4),
IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 5)
};
/**
* Values that represent MHI related HW responses to CPU commands.
* @IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE: Response to
* IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL or
* IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE commands.
*/
enum ipa_hw_2_cpu_mhi_responses {
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 0),
};
/**
* Values that represent MHI related HW event to be sent to CPU.
 * @IPA_HW_2_CPU_EVENT_MHI_CHANNEL_ERROR: Event specifying that the device
 *	detected an error in an element from the transfer ring associated
 *	with the channel
 * @IPA_HW_2_CPU_EVENT_MHI_CHANNEL_WAKE_UP_REQUEST: Event specifying that a
 *	BAM interrupt was asserted while the MHI engine is suspended
*/
enum ipa_hw_2_cpu_mhi_events {
IPA_HW_2_CPU_EVENT_MHI_CHANNEL_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 0),
IPA_HW_2_CPU_EVENT_MHI_CHANNEL_WAKE_UP_REQUEST
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 1),
};
/**
* Channel error types.
* @IPA_HW_CHANNEL_ERROR_NONE: No error persists.
* @IPA_HW_CHANNEL_INVALID_RE_ERROR: Invalid Ring Element was detected
*/
enum ipa_hw_channel_errors {
IPA_HW_CHANNEL_ERROR_NONE,
IPA_HW_CHANNEL_INVALID_RE_ERROR
};
/**
* MHI error types.
* @IPA_HW_INVALID_MMIO_ERROR: Invalid data read from MMIO space
* @IPA_HW_INVALID_CHANNEL_ERROR: Invalid data read from channel context array
* @IPA_HW_INVALID_EVENT_ERROR: Invalid data read from event ring context array
* @IPA_HW_NO_ED_IN_RING_ERROR: No event descriptors are available to report on
* secondary event ring
* @IPA_HW_LINK_ERROR: Link error
*/
enum ipa_hw_mhi_errors {
IPA_HW_INVALID_MMIO_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 0),
IPA_HW_INVALID_CHANNEL_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 1),
IPA_HW_INVALID_EVENT_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 2),
IPA_HW_NO_ED_IN_RING_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 4),
IPA_HW_LINK_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 5),
};
/**
* Structure referring to the common and MHI section of 128B shared memory
* located in offset zero of SW Partition in IPA SRAM.
* The shared memory is used for communication between IPA HW and CPU.
* @common: common section in IPA SRAM
* @interfaceVersionMhi: The MHI interface version as reported by HW
* @mhiState: Overall MHI state
* @reserved_2B: reserved
* @mhiCnl0State: State of MHI channel 0.
* The state carries information regarding the error type.
* See IPA_HW_MHI_CHANNEL_STATES.
 * @mhiCnl1State: State of MHI channel 1.
 * @mhiCnl2State: State of MHI channel 2.
 * @mhiCnl3State: State of MHI channel 3.
 * @mhiCnl4State: State of MHI channel 4.
 * @mhiCnl5State: State of MHI channel 5.
 * @mhiCnl6State: State of MHI channel 6.
 * @mhiCnl7State: State of MHI channel 7.
* @reserved_37_34: reserved
* @reserved_3B_38: reserved
* @reserved_3F_3C: reserved
*/
struct IpaHwSharedMemMhiMapping_t {
struct IpaHwSharedMemCommonMapping_t common;
u16 interfaceVersionMhi;
u8 mhiState;
u8 reserved_2B;
u8 mhiCnl0State;
u8 mhiCnl1State;
u8 mhiCnl2State;
u8 mhiCnl3State;
u8 mhiCnl4State;
u8 mhiCnl5State;
u8 mhiCnl6State;
u8 mhiCnl7State;
u32 reserved_37_34;
u32 reserved_3B_38;
u32 reserved_3F_3C;
};
/**
* Structure holding the parameters for IPA_CPU_2_HW_CMD_MHI_INIT command.
 * Parameters are sent as a pointer and thus should reside in an address
 * accessible to the HW.
 * @msiAddress: The MSI base (in device space) used for asserting the interrupt
 *	(MSI) associated with the event ring
 * @mmioBaseAddress: The address (in device space) of the MMIO structure in
 *	host space
 * @deviceMhiCtrlBaseAddress: Base address of the memory region in the device
 *	address space where the MHI control data structures are allocated by
 *	the host, including channel context array, event context array,
 *	and rings. This value is used for host/device address translation.
 * @deviceMhiDataBaseAddress: Base address of the memory region in the device
 *	address space where the MHI data buffers are allocated by the host.
 *	This value is used for host/device address translation.
 * @firstChannelIndex: First channel ID. Doorbell 0 is mapped to this channel.
 * @firstEventRingIndex: First event ring ID. Doorbell 16 is mapped to this
 *	event ring.
*/
struct IpaHwMhiInitCmdData_t {
u32 msiAddress;
u32 mmioBaseAddress;
u32 deviceMhiCtrlBaseAddress;
u32 deviceMhiDataBaseAddress;
u32 firstChannelIndex;
u32 firstEventRingIndex;
};
/**
* Structure holding the parameters for IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL
* command. Parameters are sent as 32b immediate parameters.
 * @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @contexArrayIndex: Unique index for channels, between 0 and 255. The index is
* used as an index in channel context array structures.
* @bamPipeId: The BAM pipe number for pipe dedicated for this channel
* @channelDirection: The direction of the channel as defined in the channel
* type field (CHTYPE) in the channel context data structure.
* @reserved: reserved.
*/
union IpaHwMhiInitChannelCmdData_t {
struct IpaHwMhiInitChannelCmdParams_t {
u32 channelHandle:8;
u32 contexArrayIndex:8;
u32 bamPipeId:6;
u32 channelDirection:2;
u32 reserved:8;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for IPA_CPU_2_HW_CMD_MHI_UPDATE_MSI command.
* @msiAddress_low: The MSI lower base addr (in device space) used for asserting
* the interrupt (MSI) associated with the event ring.
* @msiAddress_hi: The MSI higher base addr (in device space) used for asserting
* the interrupt (MSI) associated with the event ring.
* @msiMask: Mask indicating number of messages assigned by the host to device
* @msiData: Data Pattern to use when generating the MSI
*/
struct IpaHwMhiMsiCmdData_t {
u32 msiAddress_low;
u32 msiAddress_hi;
u32 msiMask;
u32 msiData;
};
/**
* Structure holding the parameters for
* IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE command.
* Parameters are sent as 32b immediate parameters.
* @requestedState: The requested channel state as was indicated from Host.
* Use IPA_HW_MHI_CHANNEL_STATES to specify the requested state
* @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @LPTransitionRejected: Indication that low power state transition was
* rejected
* @reserved: reserved
*/
union IpaHwMhiChangeChannelStateCmdData_t {
struct IpaHwMhiChangeChannelStateCmdParams_t {
u32 requestedState:8;
u32 channelHandle:8;
u32 LPTransitionRejected:8;
u32 reserved:8;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for
* IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE command.
* Parameters are sent as 32b immediate parameters.
* @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @reserved: reserved
*/
union IpaHwMhiStopEventUpdateData_t {
struct IpaHwMhiStopEventUpdateDataParams_t {
u32 channelHandle:8;
u32 reserved:24;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for
* IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE response.
* Parameters are sent as 32b immediate parameters.
 * @state: The new channel state. In case the state is not as requested,
 *	this is an error indication for the last command
* @channelHandle: The channel identifier
* @additonalParams: For stop: the number of pending bam descriptors currently
* queued
*/
union IpaHwMhiChangeChannelStateResponseData_t {
struct IpaHwMhiChangeChannelStateResponseParams_t {
u32 state:8;
u32 channelHandle:8;
u32 additonalParams:16;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for
* IPA_HW_2_CPU_EVENT_MHI_CHANNEL_ERROR event.
* Parameters are sent as 32b immediate parameters.
* @errorType: Type of error - IPA_HW_CHANNEL_ERRORS
* @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @reserved: reserved
*/
union IpaHwMhiChannelErrorEventData_t {
struct IpaHwMhiChannelErrorEventParams_t {
u32 errorType:8;
u32 channelHandle:8;
u32 reserved:16;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for
* IPA_HW_2_CPU_EVENT_MHI_CHANNEL_WAKE_UP_REQUEST event.
* Parameters are sent as 32b immediate parameters.
* @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @reserved: reserved
*/
union IpaHwMhiChannelWakeupEventData_t {
struct IpaHwMhiChannelWakeupEventParams_t {
u32 channelHandle:8;
u32 reserved:24;
} params;
u32 raw32b;
};
/**
* Structure holding the MHI Common statistics
 * @numULDLSync: Number of times UL activity was triggered due to DL activity
 * @numULTimerExpired: Number of times the UL accumulation timer expired
 * @numChEvCtxWpRead: Number of channel event context WP reads
 * @reserved: reserved
*/
struct IpaHwStatsMhiCmnInfoData_t {
u32 numULDLSync;
u32 numULTimerExpired;
u32 numChEvCtxWpRead;
u32 reserved;
};
/**
* Structure holding the MHI Channel statistics
* @doorbellInt: The number of doorbell int
* @reProccesed: The number of ring elements processed
* @bamFifoFull: Number of times Bam Fifo got full
* @bamFifoEmpty: Number of times Bam Fifo got empty
* @bamFifoUsageHigh: Number of times Bam fifo usage went above 75%
* @bamFifoUsageLow: Number of times Bam fifo usage went below 25%
* @bamInt: Number of BAM Interrupts
* @ringFull: Number of times Transfer Ring got full
 * @ringEmpty: Number of times Transfer Ring got empty
* @ringUsageHigh: Number of times Transfer Ring usage went above 75%
* @ringUsageLow: Number of times Transfer Ring usage went below 25%
* @delayedMsi: Number of times device triggered MSI to host after
* Interrupt Moderation Timer expiry
* @immediateMsi: Number of times device triggered MSI to host immediately
* @thresholdMsi: Number of times device triggered MSI due to max pending
* events threshold reached
* @numSuspend: Number of times channel was suspended
 * @numResume: Number of times channel was resumed
* @num_OOB: Number of times we indicated that we are OOB
* @num_OOB_timer_expiry: Number of times we indicated that we are OOB
* after timer expiry
* @num_OOB_moderation_timer_start: Number of times we started timer after
* sending OOB and hitting OOB again before we processed threshold
* number of packets
* @num_db_mode_evt: Number of times we indicated that we are in Doorbell mode
*/
struct IpaHwStatsMhiCnlInfoData_t {
u32 doorbellInt;
u32 reProccesed;
u32 bamFifoFull;
u32 bamFifoEmpty;
u32 bamFifoUsageHigh;
u32 bamFifoUsageLow;
u32 bamInt;
u32 ringFull;
u32 ringEmpty;
u32 ringUsageHigh;
u32 ringUsageLow;
u32 delayedMsi;
u32 immediateMsi;
u32 thresholdMsi;
u32 numSuspend;
u32 numResume;
u32 num_OOB;
u32 num_OOB_timer_expiry;
u32 num_OOB_moderation_timer_start;
u32 num_db_mode_evt;
};
/**
* Structure holding the MHI statistics
* @mhiCmnStats: Stats pertaining to MHI
* @mhiCnlStats: Stats pertaining to each channel
*/
struct IpaHwStatsMhiInfoData_t {
struct IpaHwStatsMhiCmnInfoData_t mhiCmnStats;
struct IpaHwStatsMhiCnlInfoData_t mhiCnlStats[
IPA_HW_MAX_NUMBER_OF_CHANNELS];
};
/**
* Structure holding the MHI Common Config info
* @isDlUlSyncEnabled: Flag to indicate if DL-UL synchronization is enabled
* @UlAccmVal: Out Channel(UL) accumulation time in ms when DL UL Sync is
* enabled
* @ulMsiEventThreshold: Threshold at which HW fires MSI to host for UL events
* @dlMsiEventThreshold: Threshold at which HW fires MSI to host for DL events
*/
struct IpaHwConfigMhiCmnInfoData_t {
u8 isDlUlSyncEnabled;
u8 UlAccmVal;
u8 ulMsiEventThreshold;
u8 dlMsiEventThreshold;
};
/**
* Structure holding the parameters for MSI info data
* @msiAddress_low: The MSI lower base addr (in device space) used for asserting
* the interrupt (MSI) associated with the event ring.
* @msiAddress_hi: The MSI higher base addr (in device space) used for asserting
* the interrupt (MSI) associated with the event ring.
* @msiMask: Mask indicating number of messages assigned by the host to device
* @msiData: Data Pattern to use when generating the MSI
*/
struct IpaHwConfigMhiMsiInfoData_t {
u32 msiAddress_low;
u32 msiAddress_hi;
u32 msiMask;
u32 msiData;
};
/**
* Structure holding the MHI Channel Config info
* @transferRingSize: The Transfer Ring size in terms of Ring Elements
* @transferRingIndex: The Transfer Ring channel number as defined by host
* @eventRingIndex: The Event Ring Index associated with this Transfer Ring
* @bamPipeIndex: The BAM Pipe associated with this channel
* @isOutChannel: Indication for the direction of channel
* @reserved_0: Reserved byte for maintaining 4byte alignment
* @reserved_1: Reserved byte for maintaining 4byte alignment
*/
struct IpaHwConfigMhiCnlInfoData_t {
u16 transferRingSize;
u8 transferRingIndex;
u8 eventRingIndex;
u8 bamPipeIndex;
u8 isOutChannel;
u8 reserved_0;
u8 reserved_1;
};
/**
* Structure holding the MHI Event Config info
* @msiVec: msi vector to invoke MSI interrupt
* @intmodtValue: Interrupt moderation timer (in milliseconds)
* @eventRingSize: The Event Ring size in terms of Ring Elements
* @eventRingIndex: The Event Ring number as defined by host
* @reserved_0: Reserved byte for maintaining 4byte alignment
* @reserved_1: Reserved byte for maintaining 4byte alignment
* @reserved_2: Reserved byte for maintaining 4byte alignment
*/
struct IpaHwConfigMhiEventInfoData_t {
u32 msiVec;
u16 intmodtValue;
u16 eventRingSize;
u8 eventRingIndex;
u8 reserved_0;
u8 reserved_1;
u8 reserved_2;
};
/**
* Structure holding the MHI Config info
* @mhiCmnCfg: Common Config pertaining to MHI
* @mhiMsiCfg: Config pertaining to MSI config
* @mhiCnlCfg: Config pertaining to each channel
* @mhiEvtCfg: Config pertaining to each event Ring
*/
struct IpaHwConfigMhiInfoData_t {
struct IpaHwConfigMhiCmnInfoData_t mhiCmnCfg;
struct IpaHwConfigMhiMsiInfoData_t mhiMsiCfg;
struct IpaHwConfigMhiCnlInfoData_t mhiCnlCfg[
IPA_HW_MAX_NUMBER_OF_CHANNELS];
struct IpaHwConfigMhiEventInfoData_t mhiEvtCfg[
IPA_HW_MAX_NUMBER_OF_EVENTRINGS];
};
struct ipa_uc_mhi_ctx {
u8 expected_responseOp;
u32 expected_responseParams;
void (*ready_cb)(void);
void (*wakeup_request_cb)(void);
u32 mhi_uc_stats_ofst;
struct IpaHwStatsMhiInfoData_t *mhi_uc_stats_mmio;
};
#define PRINT_COMMON_STATS(x) \
(nBytes += scnprintf(&dbg_buff[nBytes], size - nBytes, \
#x "=0x%x\n", ipa_uc_mhi_ctx->mhi_uc_stats_mmio->mhiCmnStats.x))
#define PRINT_CHANNEL_STATS(ch, x) \
(nBytes += scnprintf(&dbg_buff[nBytes], size - nBytes, \
#x "=0x%x\n", ipa_uc_mhi_ctx->mhi_uc_stats_mmio->mhiCnlStats[ch].x))
struct ipa_uc_mhi_ctx *ipa_uc_mhi_ctx;
static int ipa_uc_mhi_response_hdlr(struct IpaHwSharedMemCommonMapping_t
*uc_sram_mmio, u32 *uc_status)
{
IPADBG("responseOp=%d\n", uc_sram_mmio->responseOp);
if (uc_sram_mmio->responseOp == ipa_uc_mhi_ctx->expected_responseOp &&
uc_sram_mmio->responseParams ==
ipa_uc_mhi_ctx->expected_responseParams) {
*uc_status = 0;
return 0;
}
return -EINVAL;
}
static void ipa_uc_mhi_event_hdlr(struct IpaHwSharedMemCommonMapping_t
*uc_sram_mmio)
{
if (ipa_ctx->uc_ctx.uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_MHI_CHANNEL_ERROR) {
union IpaHwMhiChannelErrorEventData_t evt;
IPAERR("Channel error\n");
evt.raw32b = uc_sram_mmio->eventParams;
IPAERR("errorType=%d channelHandle=%d reserved=%d\n",
evt.params.errorType, evt.params.channelHandle,
evt.params.reserved);
} else if (ipa_ctx->uc_ctx.uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_MHI_CHANNEL_WAKE_UP_REQUEST) {
union IpaHwMhiChannelWakeupEventData_t evt;
IPADBG("WakeUp channel request\n");
evt.raw32b = uc_sram_mmio->eventParams;
IPADBG("channelHandle=%d reserved=%d\n",
evt.params.channelHandle, evt.params.reserved);
ipa_uc_mhi_ctx->wakeup_request_cb();
}
}
static void ipa_uc_mhi_event_log_info_hdlr(
struct IpaHwEventLogInfoData_t *uc_event_top_mmio)
{
if ((uc_event_top_mmio->featureMask & (1 << IPA_HW_FEATURE_MHI)) == 0) {
IPAERR("MHI feature missing 0x%x\n",
uc_event_top_mmio->featureMask);
return;
}
if (uc_event_top_mmio->statsInfo.featureInfo[IPA_HW_FEATURE_MHI].
params.size != sizeof(struct IpaHwStatsMhiInfoData_t)) {
IPAERR("mhi stats sz invalid exp=%zu is=%u\n",
sizeof(struct IpaHwStatsMhiInfoData_t),
uc_event_top_mmio->statsInfo.
featureInfo[IPA_HW_FEATURE_MHI].params.size);
return;
}
ipa_uc_mhi_ctx->mhi_uc_stats_ofst = uc_event_top_mmio->
statsInfo.baseAddrOffset + uc_event_top_mmio->statsInfo.
featureInfo[IPA_HW_FEATURE_MHI].params.offset;
IPAERR("MHI stats ofst=0x%x\n", ipa_uc_mhi_ctx->mhi_uc_stats_ofst);
if (ipa_uc_mhi_ctx->mhi_uc_stats_ofst +
sizeof(struct IpaHwStatsMhiInfoData_t) >=
ipa_ctx->ctrl->ipa_reg_base_ofst +
IPA_SRAM_DIRECT_ACCESS_N_OFST_v2_0(0) +
ipa_ctx->smem_sz) {
IPAERR("uc_mhi_stats 0x%x outside SRAM\n",
ipa_uc_mhi_ctx->mhi_uc_stats_ofst);
return;
}
ipa_uc_mhi_ctx->mhi_uc_stats_mmio =
ioremap(ipa_ctx->ipa_wrapper_base +
ipa_uc_mhi_ctx->mhi_uc_stats_ofst,
sizeof(struct IpaHwStatsMhiInfoData_t));
if (!ipa_uc_mhi_ctx->mhi_uc_stats_mmio) {
IPAERR("fail to ioremap uc mhi stats\n");
return;
}
}
int ipa2_uc_mhi_init(void (*ready_cb)(void), void (*wakeup_request_cb)(void))
{
struct ipa_uc_hdlrs hdlrs;
if (ipa_uc_mhi_ctx) {
IPAERR("Already initialized\n");
return -EFAULT;
}
ipa_uc_mhi_ctx = kzalloc(sizeof(*ipa_uc_mhi_ctx), GFP_KERNEL);
if (!ipa_uc_mhi_ctx) {
IPAERR("no mem\n");
return -ENOMEM;
}
ipa_uc_mhi_ctx->ready_cb = ready_cb;
ipa_uc_mhi_ctx->wakeup_request_cb = wakeup_request_cb;
memset(&hdlrs, 0, sizeof(hdlrs));
hdlrs.ipa_uc_loaded_hdlr = ipa_uc_mhi_ctx->ready_cb;
hdlrs.ipa_uc_response_hdlr = ipa_uc_mhi_response_hdlr;
hdlrs.ipa_uc_event_hdlr = ipa_uc_mhi_event_hdlr;
hdlrs.ipa_uc_event_log_info_hdlr = ipa_uc_mhi_event_log_info_hdlr;
ipa_uc_register_handlers(IPA_HW_FEATURE_MHI, &hdlrs);
IPADBG("Done\n");
return 0;
}
void ipa2_uc_mhi_cleanup(void)
{
struct ipa_uc_hdlrs null_hdlrs = { 0 };
IPADBG("Enter\n");
if (!ipa_uc_mhi_ctx) {
		IPAERR("ipa_uc_mhi_ctx is not initialized\n");
return;
}
ipa_uc_register_handlers(IPA_HW_FEATURE_MHI, &null_hdlrs);
kfree(ipa_uc_mhi_ctx);
ipa_uc_mhi_ctx = NULL;
IPADBG("Done\n");
}
int ipa_uc_mhi_init_engine(struct ipa_mhi_msi_info *msi, u32 mmio_addr,
u32 host_ctrl_addr, u32 host_data_addr, u32 first_ch_idx,
u32 first_evt_idx)
{
int res;
struct ipa_mem_buffer mem;
struct IpaHwMhiInitCmdData_t *init_cmd_data;
struct IpaHwMhiMsiCmdData_t *msi_cmd;
if (!ipa_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
res = ipa_uc_update_hw_flags(0);
if (res) {
IPAERR("ipa_uc_update_hw_flags failed %d\n", res);
goto disable_clks;
}
mem.size = sizeof(*init_cmd_data);
mem.base = dma_alloc_coherent(ipa_ctx->pdev, mem.size, &mem.phys_base,
GFP_KERNEL);
if (!mem.base) {
IPAERR("fail to alloc DMA buff of size %d\n", mem.size);
res = -ENOMEM;
goto disable_clks;
}
memset(mem.base, 0, mem.size);
init_cmd_data = (struct IpaHwMhiInitCmdData_t *)mem.base;
init_cmd_data->msiAddress = msi->addr_low;
init_cmd_data->mmioBaseAddress = mmio_addr;
init_cmd_data->deviceMhiCtrlBaseAddress = host_ctrl_addr;
init_cmd_data->deviceMhiDataBaseAddress = host_data_addr;
init_cmd_data->firstChannelIndex = first_ch_idx;
init_cmd_data->firstEventRingIndex = first_evt_idx;
res = ipa_uc_send_cmd((u32)mem.phys_base, IPA_CPU_2_HW_CMD_MHI_INIT, 0,
false, HZ);
if (res) {
IPAERR("ipa_uc_send_cmd failed %d\n", res);
dma_free_coherent(ipa_ctx->pdev, mem.size, mem.base,
mem.phys_base);
goto disable_clks;
}
dma_free_coherent(ipa_ctx->pdev, mem.size, mem.base, mem.phys_base);
mem.size = sizeof(*msi_cmd);
mem.base = dma_alloc_coherent(ipa_ctx->pdev, mem.size, &mem.phys_base,
GFP_KERNEL);
if (!mem.base) {
IPAERR("fail to alloc DMA buff of size %d\n", mem.size);
res = -ENOMEM;
goto disable_clks;
}
msi_cmd = (struct IpaHwMhiMsiCmdData_t *)mem.base;
msi_cmd->msiAddress_hi = msi->addr_hi;
msi_cmd->msiAddress_low = msi->addr_low;
msi_cmd->msiData = msi->data;
msi_cmd->msiMask = msi->mask;
res = ipa_uc_send_cmd((u32)mem.phys_base,
IPA_CPU_2_HW_CMD_MHI_UPDATE_MSI, 0, false, HZ);
if (res) {
IPAERR("ipa_uc_send_cmd failed %d\n", res);
dma_free_coherent(ipa_ctx->pdev, mem.size, mem.base,
mem.phys_base);
goto disable_clks;
}
dma_free_coherent(ipa_ctx->pdev, mem.size, mem.base, mem.phys_base);
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa_uc_mhi_init_channel(int ipa_ep_idx, int channelHandle,
int contexArrayIndex, int channelDirection)
{
int res;
union IpaHwMhiInitChannelCmdData_t init_cmd;
union IpaHwMhiChangeChannelStateResponseData_t uc_rsp;
if (!ipa_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
if (ipa_ep_idx < 0 || ipa_ep_idx >= ipa_ctx->ipa_num_pipes) {
IPAERR("Invalid ipa_ep_idx.\n");
return -EINVAL;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&uc_rsp, 0, sizeof(uc_rsp));
uc_rsp.params.state = IPA_HW_MHI_CHANNEL_STATE_RUN;
uc_rsp.params.channelHandle = channelHandle;
ipa_uc_mhi_ctx->expected_responseOp =
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE;
ipa_uc_mhi_ctx->expected_responseParams = uc_rsp.raw32b;
memset(&init_cmd, 0, sizeof(init_cmd));
init_cmd.params.channelHandle = channelHandle;
init_cmd.params.contexArrayIndex = contexArrayIndex;
init_cmd.params.bamPipeId = ipa_ep_idx;
init_cmd.params.channelDirection = channelDirection;
res = ipa_uc_send_cmd(init_cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL, 0, false, HZ);
if (res) {
IPAERR("ipa_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa2_uc_mhi_reset_channel(int channelHandle)
{
union IpaHwMhiChangeChannelStateCmdData_t cmd;
union IpaHwMhiChangeChannelStateResponseData_t uc_rsp;
int res;
if (!ipa_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&uc_rsp, 0, sizeof(uc_rsp));
uc_rsp.params.state = IPA_HW_MHI_CHANNEL_STATE_DISABLE;
uc_rsp.params.channelHandle = channelHandle;
ipa_uc_mhi_ctx->expected_responseOp =
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE;
ipa_uc_mhi_ctx->expected_responseParams = uc_rsp.raw32b;
memset(&cmd, 0, sizeof(cmd));
cmd.params.requestedState = IPA_HW_MHI_CHANNEL_STATE_DISABLE;
cmd.params.channelHandle = channelHandle;
res = ipa_uc_send_cmd(cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE, 0, false, HZ);
if (res) {
IPAERR("ipa_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa2_uc_mhi_suspend_channel(int channelHandle)
{
union IpaHwMhiChangeChannelStateCmdData_t cmd;
union IpaHwMhiChangeChannelStateResponseData_t uc_rsp;
int res;
if (!ipa_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&uc_rsp, 0, sizeof(uc_rsp));
uc_rsp.params.state = IPA_HW_MHI_CHANNEL_STATE_SUSPEND;
uc_rsp.params.channelHandle = channelHandle;
ipa_uc_mhi_ctx->expected_responseOp =
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE;
ipa_uc_mhi_ctx->expected_responseParams = uc_rsp.raw32b;
memset(&cmd, 0, sizeof(cmd));
cmd.params.requestedState = IPA_HW_MHI_CHANNEL_STATE_SUSPEND;
cmd.params.channelHandle = channelHandle;
res = ipa_uc_send_cmd(cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE, 0, false, HZ);
if (res) {
IPAERR("ipa_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa_uc_mhi_resume_channel(int channelHandle, bool LPTransitionRejected)
{
union IpaHwMhiChangeChannelStateCmdData_t cmd;
union IpaHwMhiChangeChannelStateResponseData_t uc_rsp;
int res;
if (!ipa_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&uc_rsp, 0, sizeof(uc_rsp));
uc_rsp.params.state = IPA_HW_MHI_CHANNEL_STATE_RUN;
uc_rsp.params.channelHandle = channelHandle;
ipa_uc_mhi_ctx->expected_responseOp =
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE;
ipa_uc_mhi_ctx->expected_responseParams = uc_rsp.raw32b;
memset(&cmd, 0, sizeof(cmd));
cmd.params.requestedState = IPA_HW_MHI_CHANNEL_STATE_RUN;
cmd.params.channelHandle = channelHandle;
cmd.params.LPTransitionRejected = LPTransitionRejected;
res = ipa_uc_send_cmd(cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE, 0, false, HZ);
if (res) {
IPAERR("ipa_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa2_uc_mhi_stop_event_update_channel(int channelHandle)
{
union IpaHwMhiStopEventUpdateData_t cmd;
int res;
if (!ipa_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&cmd, 0, sizeof(cmd));
cmd.params.channelHandle = channelHandle;
ipa_uc_mhi_ctx->expected_responseOp =
IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE;
ipa_uc_mhi_ctx->expected_responseParams = cmd.raw32b;
res = ipa_uc_send_cmd(cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE, 0, false, HZ);
if (res) {
IPAERR("ipa_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa2_uc_mhi_send_dl_ul_sync_info(union IpaHwMhiDlUlSyncCmdData_t *cmd)
{
int res;
if (!ipa_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPADBG("isDlUlSyncEnabled=0x%x UlAccmVal=0x%x\n",
cmd->params.isDlUlSyncEnabled, cmd->params.UlAccmVal);
IPADBG("ulMsiEventThreshold=0x%x dlMsiEventThreshold=0x%x\n",
cmd->params.ulMsiEventThreshold,
cmd->params.dlMsiEventThreshold);
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
res = ipa_uc_send_cmd(cmd->raw32b,
IPA_CPU_2_HW_CMD_MHI_DL_UL_SYNC_INFO, 0, false, HZ);
if (res) {
IPAERR("ipa_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa2_uc_mhi_print_stats(char *dbg_buff, int size)
{
int nBytes = 0;
int i;
if (!ipa_uc_mhi_ctx || !ipa_uc_mhi_ctx->mhi_uc_stats_mmio) {
IPAERR("MHI uc stats is not valid\n");
return 0;
}
nBytes += scnprintf(&dbg_buff[nBytes], size - nBytes,
"Common Stats:\n");
PRINT_COMMON_STATS(numULDLSync);
PRINT_COMMON_STATS(numULTimerExpired);
PRINT_COMMON_STATS(numChEvCtxWpRead);
for (i = 0; i < IPA_HW_MAX_NUMBER_OF_CHANNELS; i++) {
nBytes += scnprintf(&dbg_buff[nBytes], size - nBytes,
"Channel %d Stats:\n", i);
PRINT_CHANNEL_STATS(i, doorbellInt);
PRINT_CHANNEL_STATS(i, reProccesed);
PRINT_CHANNEL_STATS(i, bamFifoFull);
PRINT_CHANNEL_STATS(i, bamFifoEmpty);
PRINT_CHANNEL_STATS(i, bamFifoUsageHigh);
PRINT_CHANNEL_STATS(i, bamFifoUsageLow);
PRINT_CHANNEL_STATS(i, bamInt);
PRINT_CHANNEL_STATS(i, ringFull);
PRINT_CHANNEL_STATS(i, ringEmpty);
PRINT_CHANNEL_STATS(i, ringUsageHigh);
PRINT_CHANNEL_STATS(i, ringUsageLow);
PRINT_CHANNEL_STATS(i, delayedMsi);
PRINT_CHANNEL_STATS(i, immediateMsi);
PRINT_CHANNEL_STATS(i, thresholdMsi);
PRINT_CHANNEL_STATS(i, numSuspend);
PRINT_CHANNEL_STATS(i, numResume);
PRINT_CHANNEL_STATS(i, num_OOB);
PRINT_CHANNEL_STATS(i, num_OOB_timer_expiry);
PRINT_CHANNEL_STATS(i, num_OOB_moderation_timer_start);
PRINT_CHANNEL_STATS(i, num_db_mode_evt);
}
return nBytes;
}


@@ -0,0 +1,438 @@
/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include "ipa_i.h"
#define IPA_UC_NTN_DB_PA_TX 0x79620DC
#define IPA_UC_NTN_DB_PA_RX 0x79620D8
static void ipa_uc_ntn_event_handler(
struct IpaHwSharedMemCommonMapping_t *uc_sram_mmio)
{
union IpaHwNTNErrorEventData_t ntn_evt;
if (uc_sram_mmio->eventOp == IPA_HW_2_CPU_EVENT_NTN_ERROR) {
ntn_evt.raw32b = uc_sram_mmio->eventParams;
IPADBG("uC NTN evt errType=%u pipe=%d cherrType=%u\n",
ntn_evt.params.ntn_error_type,
ntn_evt.params.ipa_pipe_number,
ntn_evt.params.ntn_ch_err_type);
}
}
static void ipa_uc_ntn_event_log_info_handler(
struct IpaHwEventLogInfoData_t *uc_event_top_mmio)
{
if ((uc_event_top_mmio->featureMask & (1 << IPA_HW_FEATURE_NTN)) == 0) {
IPAERR("NTN feature missing 0x%x\n",
uc_event_top_mmio->featureMask);
return;
}
if (uc_event_top_mmio->statsInfo.featureInfo[IPA_HW_FEATURE_NTN].
params.size != sizeof(struct IpaHwStatsNTNInfoData_t)) {
IPAERR("NTN stats sz invalid exp=%zu is=%u\n",
sizeof(struct IpaHwStatsNTNInfoData_t),
uc_event_top_mmio->statsInfo.
featureInfo[IPA_HW_FEATURE_NTN].params.size);
return;
}
ipa_ctx->uc_ntn_ctx.ntn_uc_stats_ofst = uc_event_top_mmio->
statsInfo.baseAddrOffset + uc_event_top_mmio->statsInfo.
featureInfo[IPA_HW_FEATURE_NTN].params.offset;
IPADBG("NTN stats ofst=0x%x\n", ipa_ctx->uc_ntn_ctx.ntn_uc_stats_ofst);
if (ipa_ctx->uc_ntn_ctx.ntn_uc_stats_ofst +
sizeof(struct IpaHwStatsNTNInfoData_t) >=
ipa_ctx->ctrl->ipa_reg_base_ofst +
IPA_SRAM_DIRECT_ACCESS_N_OFST_v2_0(0) +
ipa_ctx->smem_sz) {
IPAERR("uc_ntn_stats 0x%x outside SRAM\n",
ipa_ctx->uc_ntn_ctx.ntn_uc_stats_ofst);
return;
}
ipa_ctx->uc_ntn_ctx.ntn_uc_stats_mmio =
ioremap(ipa_ctx->ipa_wrapper_base +
ipa_ctx->uc_ntn_ctx.ntn_uc_stats_ofst,
sizeof(struct IpaHwStatsNTNInfoData_t));
if (!ipa_ctx->uc_ntn_ctx.ntn_uc_stats_mmio) {
IPAERR("fail to ioremap uc ntn stats\n");
return;
}
}
/**
* ipa2_get_ntn_stats() - Query NTN statistics from uc
* @stats: [inout] stats blob from client populated by driver
*
* Returns: 0 on success, negative on failure
*
* @note Cannot be called from atomic context
*
*/
int ipa2_get_ntn_stats(struct IpaHwStatsNTNInfoData_t *stats)
{
#define TX_STATS(y) stats->tx_ch_stats[0].y = \
ipa_ctx->uc_ntn_ctx.ntn_uc_stats_mmio->tx_ch_stats[0].y
#define RX_STATS(y) stats->rx_ch_stats[0].y = \
ipa_ctx->uc_ntn_ctx.ntn_uc_stats_mmio->rx_ch_stats[0].y
if (unlikely(!ipa_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
if (!stats || !ipa_ctx->uc_ntn_ctx.ntn_uc_stats_mmio) {
IPAERR("bad params stats=%p ntn_stats=%p\n",
stats,
ipa_ctx->uc_ntn_ctx.ntn_uc_stats_mmio);
return -EINVAL;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
TX_STATS(num_pkts_processed);
TX_STATS(tail_ptr_val);
TX_STATS(num_db_fired);
TX_STATS(tx_comp_ring_stats.ringFull);
TX_STATS(tx_comp_ring_stats.ringEmpty);
TX_STATS(tx_comp_ring_stats.ringUsageHigh);
TX_STATS(tx_comp_ring_stats.ringUsageLow);
TX_STATS(tx_comp_ring_stats.RingUtilCount);
TX_STATS(bam_stats.bamFifoFull);
TX_STATS(bam_stats.bamFifoEmpty);
TX_STATS(bam_stats.bamFifoUsageHigh);
TX_STATS(bam_stats.bamFifoUsageLow);
TX_STATS(bam_stats.bamUtilCount);
TX_STATS(num_db);
TX_STATS(num_unexpected_db);
TX_STATS(num_bam_int_handled);
TX_STATS(num_bam_int_in_non_running_state);
TX_STATS(num_qmb_int_handled);
TX_STATS(num_bam_int_handled_while_wait_for_bam);
TX_STATS(num_bam_int_handled_while_not_in_bam);
RX_STATS(max_outstanding_pkts);
RX_STATS(num_pkts_processed);
RX_STATS(rx_ring_rp_value);
RX_STATS(rx_ind_ring_stats.ringFull);
RX_STATS(rx_ind_ring_stats.ringEmpty);
RX_STATS(rx_ind_ring_stats.ringUsageHigh);
RX_STATS(rx_ind_ring_stats.ringUsageLow);
RX_STATS(rx_ind_ring_stats.RingUtilCount);
RX_STATS(bam_stats.bamFifoFull);
RX_STATS(bam_stats.bamFifoEmpty);
RX_STATS(bam_stats.bamFifoUsageHigh);
RX_STATS(bam_stats.bamFifoUsageLow);
RX_STATS(bam_stats.bamUtilCount);
RX_STATS(num_bam_int_handled);
RX_STATS(num_db);
RX_STATS(num_unexpected_db);
RX_STATS(num_pkts_in_dis_uninit_state);
RX_STATS(num_bam_int_handled_while_not_in_bam);
RX_STATS(num_bam_int_handled_while_in_bam_state);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return 0;
}
int ipa2_register_ipa_ready_cb(void (*ipa_ready_cb)(void *), void *user_data)
{
int ret;
ret = ipa2_uc_state_check();
if (ret) {
/* uC not loaded yet; save the callback to fire once it is */
ipa_ctx->uc_ntn_ctx.uc_ready_cb = ipa_ready_cb;
ipa_ctx->uc_ntn_ctx.priv = user_data;
return 0;
}
return -EEXIST;
}
static void ipa_uc_ntn_loaded_handler(void)
{
if (!ipa_ctx) {
IPAERR("IPA ctx is null\n");
return;
}
if (ipa_ctx->uc_ntn_ctx.uc_ready_cb) {
ipa_ctx->uc_ntn_ctx.uc_ready_cb(
ipa_ctx->uc_ntn_ctx.priv);
ipa_ctx->uc_ntn_ctx.uc_ready_cb =
NULL;
ipa_ctx->uc_ntn_ctx.priv = NULL;
}
}
int ipa_ntn_init(void)
{
struct ipa_uc_hdlrs uc_ntn_cbs = { 0 };
uc_ntn_cbs.ipa_uc_event_hdlr = ipa_uc_ntn_event_handler;
uc_ntn_cbs.ipa_uc_event_log_info_hdlr =
ipa_uc_ntn_event_log_info_handler;
uc_ntn_cbs.ipa_uc_loaded_hdlr =
ipa_uc_ntn_loaded_handler;
ipa_uc_register_handlers(IPA_HW_FEATURE_NTN, &uc_ntn_cbs);
return 0;
}
static int ipa2_uc_send_ntn_setup_pipe_cmd(
struct ipa_ntn_setup_info *ntn_info, u8 dir)
{
int ipa_ep_idx;
int result = 0;
struct ipa_mem_buffer cmd;
struct IpaHwNtnSetUpCmdData_t *Ntn_params;
struct IpaHwOffloadSetUpCmdData_t *cmd_data;
if (ntn_info == NULL) {
IPAERR("invalid input\n");
return -EINVAL;
}
ipa_ep_idx = ipa_get_ep_mapping(ntn_info->client);
if (ipa_ep_idx == -1) {
IPAERR("fail to get ep idx.\n");
return -EFAULT;
}
IPADBG("client=%d ep=%d\n", ntn_info->client, ipa_ep_idx);
IPADBG("ring_base_pa = 0x%pa\n",
&ntn_info->ring_base_pa);
IPADBG("ntn_ring_size = %d\n", ntn_info->ntn_ring_size);
IPADBG("buff_pool_base_pa = 0x%pa\n", &ntn_info->buff_pool_base_pa);
IPADBG("num_buffers = %d\n", ntn_info->num_buffers);
IPADBG("data_buff_size = %d\n", ntn_info->data_buff_size);
IPADBG("tail_ptr_base_pa = 0x%pa\n", &ntn_info->ntn_reg_base_ptr_pa);
cmd.size = sizeof(*cmd_data);
cmd.base = dma_alloc_coherent(ipa_ctx->uc_pdev, cmd.size,
&cmd.phys_base, GFP_KERNEL);
if (cmd.base == NULL) {
IPAERR("fail to get DMA memory.\n");
return -ENOMEM;
}
cmd_data = (struct IpaHwOffloadSetUpCmdData_t *)cmd.base;
cmd_data->protocol = IPA_HW_FEATURE_NTN;
Ntn_params = &cmd_data->SetupCh_params.NtnSetupCh_params;
Ntn_params->ring_base_pa = ntn_info->ring_base_pa;
Ntn_params->buff_pool_base_pa = ntn_info->buff_pool_base_pa;
Ntn_params->ntn_ring_size = ntn_info->ntn_ring_size;
Ntn_params->num_buffers = ntn_info->num_buffers;
Ntn_params->ntn_reg_base_ptr_pa = ntn_info->ntn_reg_base_ptr_pa;
Ntn_params->data_buff_size = ntn_info->data_buff_size;
Ntn_params->ipa_pipe_number = ipa_ep_idx;
Ntn_params->dir = dir;
result = ipa_uc_send_cmd((u32)(cmd.phys_base),
IPA_CPU_2_HW_CMD_OFFLOAD_CHANNEL_SET_UP,
IPA_HW_2_CPU_OFFLOAD_CMD_STATUS_SUCCESS,
false, 10*HZ);
if (result)
result = -EFAULT;
dma_free_coherent(ipa_ctx->uc_pdev, cmd.size, cmd.base, cmd.phys_base);
return result;
}
/**
* ipa2_setup_uc_ntn_pipes() - setup uc offload pipes
*/
int ipa2_setup_uc_ntn_pipes(struct ipa_ntn_conn_in_params *in,
ipa_notify_cb notify, void *priv, u8 hdr_len,
struct ipa_ntn_conn_out_params *outp)
{
int ipa_ep_idx_ul, ipa_ep_idx_dl;
struct ipa_ep_context *ep_ul, *ep_dl;
int result = 0;
if (in == NULL || outp == NULL) {
IPAERR("invalid input\n");
return -EINVAL;
}
ipa_ep_idx_ul = ipa_get_ep_mapping(in->ul.client);
ipa_ep_idx_dl = ipa_get_ep_mapping(in->dl.client);
if (ipa_ep_idx_ul == -1 || ipa_ep_idx_dl == -1) {
IPAERR("fail to alloc EP.\n");
return -EFAULT;
}
ep_ul = &ipa_ctx->ep[ipa_ep_idx_ul];
ep_dl = &ipa_ctx->ep[ipa_ep_idx_dl];
if (ep_ul->valid || ep_dl->valid) {
IPAERR("EP already allocated ul:%d dl:%d\n",
ep_ul->valid, ep_dl->valid);
return -EFAULT;
}
memset(ep_ul, 0, offsetof(struct ipa_ep_context, sys));
memset(ep_dl, 0, offsetof(struct ipa_ep_context, sys));
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
/* setup ul ep cfg */
ep_ul->valid = 1;
ep_ul->client = in->ul.client;
result = ipa_enable_data_path(ipa_ep_idx_ul);
if (result) {
IPAERR("enable data path failed res=%d clnt=%d.\n", result,
ipa_ep_idx_ul);
result = -EFAULT;
goto fail;
}
ep_ul->client_notify = notify;
ep_ul->priv = priv;
memset(&ep_ul->cfg, 0, sizeof(ep_ul->cfg));
ep_ul->cfg.nat.nat_en = IPA_SRC_NAT;
ep_ul->cfg.hdr.hdr_len = hdr_len;
ep_ul->cfg.mode.mode = IPA_BASIC;
if (ipa2_cfg_ep(ipa_ep_idx_ul, &ep_ul->cfg)) {
IPAERR("fail to setup ul pipe cfg\n");
result = -EFAULT;
goto fail;
}
if (ipa2_uc_send_ntn_setup_pipe_cmd(&in->ul, IPA_NTN_RX_DIR)) {
IPAERR("fail to send cmd to uc for ul pipe\n");
result = -EFAULT;
goto fail;
}
ipa_install_dflt_flt_rules(ipa_ep_idx_ul);
outp->ul_uc_db_pa = IPA_UC_NTN_DB_PA_RX;
ep_ul->uc_offload_state |= IPA_UC_OFFLOAD_CONNECTED;
IPADBG("client %d (ep: %d) connected\n", in->ul.client,
ipa_ep_idx_ul);
/* setup dl ep cfg */
ep_dl->valid = 1;
ep_dl->client = in->dl.client;
result = ipa_enable_data_path(ipa_ep_idx_dl);
if (result) {
IPAERR("enable data path failed res=%d clnt=%d.\n", result,
ipa_ep_idx_dl);
result = -EFAULT;
goto fail;
}
memset(&ep_dl->cfg, 0, sizeof(ep_dl->cfg));
ep_dl->cfg.nat.nat_en = IPA_BYPASS_NAT;
ep_dl->cfg.hdr.hdr_len = hdr_len;
ep_dl->cfg.mode.mode = IPA_BASIC;
if (ipa2_cfg_ep(ipa_ep_idx_dl, &ep_dl->cfg)) {
IPAERR("fail to setup dl pipe cfg\n");
result = -EFAULT;
goto fail;
}
if (ipa2_uc_send_ntn_setup_pipe_cmd(&in->dl, IPA_NTN_TX_DIR)) {
IPAERR("fail to send cmd to uc for dl pipe\n");
result = -EFAULT;
goto fail;
}
outp->dl_uc_db_pa = IPA_UC_NTN_DB_PA_TX;
ep_dl->uc_offload_state |= IPA_UC_OFFLOAD_CONNECTED;
IPADBG("client %d (ep: %d) connected\n", in->dl.client,
ipa_ep_idx_dl);
fail:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return result;
}
/**
* ipa2_tear_down_uc_offload_pipes() - tear down uc offload pipes
*/
int ipa2_tear_down_uc_offload_pipes(int ipa_ep_idx_ul,
int ipa_ep_idx_dl)
{
struct ipa_mem_buffer cmd;
struct ipa_ep_context *ep_ul, *ep_dl;
struct IpaHwOffloadCommonChCmdData_t *cmd_data;
union IpaHwNtnCommonChCmdData_t *tear;
int result = 0;
IPADBG("ep_ul = %d\n", ipa_ep_idx_ul);
IPADBG("ep_dl = %d\n", ipa_ep_idx_dl);
ep_ul = &ipa_ctx->ep[ipa_ep_idx_ul];
ep_dl = &ipa_ctx->ep[ipa_ep_idx_dl];
if (ep_ul->uc_offload_state != IPA_UC_OFFLOAD_CONNECTED ||
ep_dl->uc_offload_state != IPA_UC_OFFLOAD_CONNECTED) {
IPAERR("channel bad state: ul %d dl %d\n",
ep_ul->uc_offload_state, ep_dl->uc_offload_state);
return -EFAULT;
}
cmd.size = sizeof(*cmd_data);
cmd.base = dma_alloc_coherent(ipa_ctx->uc_pdev, cmd.size,
&cmd.phys_base, GFP_KERNEL);
if (cmd.base == NULL) {
IPAERR("fail to get DMA memory.\n");
return -ENOMEM;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
/* teardown the UL pipe */
cmd_data = (struct IpaHwOffloadCommonChCmdData_t *)cmd.base;
cmd_data->protocol = IPA_HW_FEATURE_NTN;
tear = &cmd_data->CommonCh_params.NtnCommonCh_params;
tear->params.ipa_pipe_number = ipa_ep_idx_ul;
result = ipa_uc_send_cmd((u32)(cmd.phys_base),
IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN,
IPA_HW_2_CPU_OFFLOAD_CMD_STATUS_SUCCESS,
false, 10*HZ);
if (result) {
IPAERR("fail to tear down ul pipe\n");
result = -EFAULT;
goto fail;
}
ipa_disable_data_path(ipa_ep_idx_ul);
ipa_delete_dflt_flt_rules(ipa_ep_idx_ul);
memset(&ipa_ctx->ep[ipa_ep_idx_ul], 0, sizeof(struct ipa_ep_context));
IPADBG("ul client (ep: %d) disconnected\n", ipa_ep_idx_ul);
/* teardown the DL pipe */
tear->params.ipa_pipe_number = ipa_ep_idx_dl;
result = ipa_uc_send_cmd((u32)(cmd.phys_base),
IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN,
IPA_HW_2_CPU_OFFLOAD_CMD_STATUS_SUCCESS,
false, 10*HZ);
if (result) {
IPAERR("fail to tear down dl pipe\n");
result = -EFAULT;
goto fail;
}
ipa_disable_data_path(ipa_ep_idx_dl);
memset(&ipa_ctx->ep[ipa_ep_idx_dl], 0, sizeof(struct ipa_ep_context));
IPADBG("dl client (ep: %d) disconnected\n", ipa_ep_idx_dl);
fail:
dma_free_coherent(ipa_ctx->uc_pdev, cmd.size, cmd.base, cmd.phys_base);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return result;
}


@@ -0,0 +1,514 @@
/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_UC_OFFLOAD_I_H_
#define _IPA_UC_OFFLOAD_I_H_
#include <linux/ipa.h>
#include "ipa_i.h"
/*
* Neutrino protocol related data structures
*/
#define IPA_UC_MAX_NTN_TX_CHANNELS 1
#define IPA_UC_MAX_NTN_RX_CHANNELS 1
#define IPA_NTN_TX_DIR 1
#define IPA_NTN_RX_DIR 2
/**
* @brief Enum value determined based on the feature it
* corresponds to
* +----------------+----------------+
* | 3 bits | 5 bits |
* +----------------+----------------+
* | HW_FEATURE | OPCODE |
* +----------------+----------------+
*
*/
#define FEATURE_ENUM_VAL(feature, opcode) ((feature << 5) | opcode)
#define EXTRACT_UC_FEATURE(value) (value >> 5)
#define IPA_HW_NUM_FEATURES 0x8
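The 3-bit feature / 5-bit opcode packing above is easy to sanity-check in isolation; the sketch below mirrors the two macros as plain functions (the lowercase names are illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of FEATURE_ENUM_VAL()/EXTRACT_UC_FEATURE(): the upper 3 bits
 * of the 8-bit value carry the HW feature ID, the lower 5 the opcode.
 */
static uint8_t feature_enum_val(uint8_t feature, uint8_t opcode)
{
	return (uint8_t)((feature << 5) | (opcode & 0x1f));
}

static uint8_t extract_uc_feature(uint8_t value)
{
	return value >> 5;
}
```

For example, IPA_HW_2_CPU_EVENT_NTN_ERROR is FEATURE_ENUM_VAL(IPA_HW_FEATURE_NTN, 0), i.e. (0x4 << 5) | 0 = 0x80.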
/**
* enum ipa_hw_features - Values that represent the features supported in IPA HW
* @IPA_HW_FEATURE_COMMON : Feature related to common operation of IPA HW
* @IPA_HW_FEATURE_MHI : Feature related to MHI operation in IPA HW
* @IPA_HW_FEATURE_WDI : Feature related to WDI operation in IPA HW
* @IPA_HW_FEATURE_NTN : Feature related to NTN operation in IPA HW
* @IPA_HW_FEATURE_OFFLOAD : Feature related to offload operation in IPA HW
*/
enum ipa_hw_features {
IPA_HW_FEATURE_COMMON = 0x0,
IPA_HW_FEATURE_MHI = 0x1,
IPA_HW_FEATURE_WDI = 0x3,
IPA_HW_FEATURE_NTN = 0x4,
IPA_HW_FEATURE_OFFLOAD = 0x5,
IPA_HW_FEATURE_MAX = IPA_HW_NUM_FEATURES
};
/**
* struct IpaHwSharedMemCommonMapping_t - Structure referring to the common
* section in 128B shared memory located in offset zero of SW Partition in IPA
* SRAM.
* @cmdOp : CPU->HW command opcode. See IPA_CPU_2_HW_COMMANDS
* @cmdParams : CPU->HW command parameter. The parameter field can hold 32 bits
* of immediate parameters or point to a structure in system memory
* (in which case the address must be accessible to HW)
* @responseOp : HW->CPU response opcode. See IPA_HW_2_CPU_RESPONSES
* @responseParams : HW->CPU response parameter. The parameter field can hold
* 32 bits of immediate parameters or point to a structure in system memory
* @eventOp : HW->CPU event opcode. See IPA_HW_2_CPU_EVENTS
* @eventParams : HW->CPU event parameter. The parameter field can hold 32 bits
* of immediate parameters or point to a structure in system memory
* @firstErrorAddress : Contains the address of first error-source on SNOC
* @hwState : State of HW. The state carries information regarding the error
* type.
* @warningCounter : The warnings counter. The counter carries information
* regarding non fatal errors in HW
* @interfaceVersionCommon : The Common interface version as reported by HW
*
* The shared memory is used for communication between IPA HW and CPU.
*/
struct IpaHwSharedMemCommonMapping_t {
u8 cmdOp;
u8 reserved_01;
u16 reserved_03_02;
u32 cmdParams;
u8 responseOp;
u8 reserved_09;
u16 reserved_0B_0A;
u32 responseParams;
u8 eventOp;
u8 reserved_11;
u16 reserved_13_12;
u32 eventParams;
u32 reserved_1B_18;
u32 firstErrorAddress;
u8 hwState;
u8 warningCounter;
u16 reserved_23_22;
u16 interfaceVersionCommon;
u16 reserved_27_26;
} __packed;
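The reserved field names in the mapping above encode byte offsets (e.g. reserved_1B_18 covers bytes 0x18-0x1B), so the layout can be verified mechanically with offsetof(). A minimal standalone sketch of that check (the struct is duplicated here only for illustration, using __attribute__((packed)) in place of the kernel's __packed):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Standalone copy of the 128B shared-memory common mapping, packed
 * the same way the driver packs it, so offsets can be verified.
 */
struct shmem_common_mapping {
	uint8_t  cmdOp;
	uint8_t  reserved_01;
	uint16_t reserved_03_02;
	uint32_t cmdParams;
	uint8_t  responseOp;
	uint8_t  reserved_09;
	uint16_t reserved_0B_0A;
	uint32_t responseParams;
	uint8_t  eventOp;
	uint8_t  reserved_11;
	uint16_t reserved_13_12;
	uint32_t eventParams;
	uint32_t reserved_1B_18;
	uint32_t firstErrorAddress;
	uint8_t  hwState;
	uint8_t  warningCounter;
	uint16_t reserved_23_22;
	uint16_t interfaceVersionCommon;
	uint16_t reserved_27_26;
} __attribute__((packed));
```

Every reserved name lines up with its offsetof() value, and the whole mapping is 0x28 bytes.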
/**
* union IpaHwFeatureInfoData_t - parameters for stats/config blob
*
* @offset : Location of a feature within the EventInfoData
* @size : Size of the feature
*/
union IpaHwFeatureInfoData_t {
struct IpaHwFeatureInfoParams_t {
u32 offset:16;
u32 size:16;
} __packed params;
u32 raw32b;
} __packed;
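The raw32b overlay above depends on bitfield allocation order; on the little-endian ARM targets this driver runs on, `offset` lands in the low 16 bits. A hedged standalone check (the union is duplicated here only for illustration; bitfield layout is implementation-defined in general):

```c
#include <assert.h>
#include <stdint.h>

/* Standalone copy of IpaHwFeatureInfoData_t. The decode assumes the
 * little-endian allocation used by GCC on ARM: `offset` occupies
 * bits 0-15 of raw32b and `size` bits 16-31.
 */
union feature_info {
	struct {
		uint32_t offset:16;
		uint32_t size:16;
	} params;
	uint32_t raw32b;
};

static uint16_t fi_offset(uint32_t raw)
{
	union feature_info fi = { .raw32b = raw };
	return (uint16_t)fi.params.offset;
}

static uint16_t fi_size(uint32_t raw)
{
	union feature_info fi = { .raw32b = raw };
	return (uint16_t)fi.params.size;
}
```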
/**
* struct IpaHwEventInfoData_t - Structure holding the parameters for
* statistics and config info
*
* @baseAddrOffset : Base Address Offset of the statistics or config
* structure from IPA_WRAPPER_BASE
* @IpaHwFeatureInfoData_t : Location and size of each feature within
* the statistics or config structure
*
* @note Information about each feature in the featureInfo[]
* array is populated at predefined indices per the IPA_HW_FEATURES
* enum definition
*/
struct IpaHwEventInfoData_t {
u32 baseAddrOffset;
union IpaHwFeatureInfoData_t featureInfo[IPA_HW_NUM_FEATURES];
} __packed;
/**
* struct IpaHwEventLogInfoData_t - Structure holding the parameters for
* IPA_HW_2_CPU_EVENT_LOG_INFO Event
*
* @featureMask : Mask indicating the features enabled in HW.
* Refer IPA_HW_FEATURE_MASK
* @circBuffBaseAddrOffset : Base Address Offset of the Circular Event
* Log Buffer structure
* @statsInfo : Statistics related information
* @configInfo : Configuration related information
*
* @note The offset location of this structure from IPA_WRAPPER_BASE
* will be provided as Event Params for the IPA_HW_2_CPU_EVENT_LOG_INFO
* Event
*/
struct IpaHwEventLogInfoData_t {
u32 featureMask;
u32 circBuffBaseAddrOffset;
struct IpaHwEventInfoData_t statsInfo;
struct IpaHwEventInfoData_t configInfo;
} __packed;
/**
* struct ipa_uc_ntn_ctx
* @ntn_uc_stats_ofst: Neutrino stats offset
* @ntn_uc_stats_mmio: Neutrino stats
* @priv: private data of client
* @uc_ready_cb: uc Ready cb
*/
struct ipa_uc_ntn_ctx {
u32 ntn_uc_stats_ofst;
struct IpaHwStatsNTNInfoData_t *ntn_uc_stats_mmio;
void *priv;
ipa_uc_ready_cb uc_ready_cb;
};
/**
* enum ipa_hw_2_cpu_ntn_events - Values that represent HW event
* to be sent to CPU
* @IPA_HW_2_CPU_EVENT_NTN_ERROR : Event to specify that HW
* detected an error in NTN
*
*/
enum ipa_hw_2_cpu_ntn_events {
IPA_HW_2_CPU_EVENT_NTN_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_NTN, 0),
};
/**
* enum ipa_hw_ntn_errors - NTN specific error types.
* @IPA_HW_NTN_ERROR_NONE : No error persists
* @IPA_HW_NTN_CHANNEL_ERROR : Error is specific to channel
*/
enum ipa_hw_ntn_errors {
IPA_HW_NTN_ERROR_NONE = 0,
IPA_HW_NTN_CHANNEL_ERROR = 1
};
/**
* enum ipa_hw_ntn_channel_states - Values that represent NTN
* channel state machine.
* @IPA_HW_NTN_CHANNEL_STATE_INITED_DISABLED : Channel is
* initialized but disabled
* @IPA_HW_NTN_CHANNEL_STATE_RUNNING : Channel is running.
* Entered after SET_UP_COMMAND is processed successfully
* @IPA_HW_NTN_CHANNEL_STATE_ERROR : Channel is in error state
* @IPA_HW_NTN_CHANNEL_STATE_INVALID : Invalid state. Shall not
* be in use in operational scenario
*
* These states apply to both Tx and Rx paths. These do not reflect the
* sub-state the state machine may be in.
*/
enum ipa_hw_ntn_channel_states {
IPA_HW_NTN_CHANNEL_STATE_INITED_DISABLED = 1,
IPA_HW_NTN_CHANNEL_STATE_RUNNING = 2,
IPA_HW_NTN_CHANNEL_STATE_ERROR = 3,
IPA_HW_NTN_CHANNEL_STATE_INVALID = 0xFF
};
/**
* enum ipa_hw_ntn_channel_errors - List of NTN Channel error
* types. This is present in the event param
* @IPA_HW_NTN_CH_ERR_NONE: No error persists
* @IPA_HW_NTN_TX_FSM_ERROR: Error in the state machine
* transition
* @IPA_HW_NTN_TX_COMP_RE_FETCH_FAIL: Error while calculating
* num RE to bring
* @IPA_HW_NTN_RX_RING_WP_UPDATE_FAIL: Write pointer update
* failed in Rx ring
* @IPA_HW_NTN_RX_FSM_ERROR: Error in the state machine
* transition
* @IPA_HW_NTN_RX_CACHE_NON_EMPTY:
* @IPA_HW_NTN_CH_ERR_RESERVED:
*
* These states apply to both Tx and Rx paths. These do not
* reflect the sub-state the state machine may be in.
*/
enum ipa_hw_ntn_channel_errors {
IPA_HW_NTN_CH_ERR_NONE = 0,
IPA_HW_NTN_TX_RING_WP_UPDATE_FAIL = 1,
IPA_HW_NTN_TX_FSM_ERROR = 2,
IPA_HW_NTN_TX_COMP_RE_FETCH_FAIL = 3,
IPA_HW_NTN_RX_RING_WP_UPDATE_FAIL = 4,
IPA_HW_NTN_RX_FSM_ERROR = 5,
IPA_HW_NTN_RX_CACHE_NON_EMPTY = 6,
IPA_HW_NTN_CH_ERR_RESERVED = 0xFF
};
/**
* struct IpaHwNtnSetUpCmdData_t - Ntn setup command data
* @ring_base_pa: physical address of the base of the Tx/Rx NTN
* ring
* @buff_pool_base_pa: physical address of the base of the Tx/Rx
* buffer pool
* @ntn_ring_size: size of the Tx/Rx NTN ring
* @num_buffers: Rx/tx buffer pool size
* @ntn_reg_base_ptr_pa: physical address of the Tx/Rx NTN
* Ring's tail pointer
* @ipa_pipe_number: IPA pipe number that has to be used for the
* Tx/Rx path
* @dir: Tx/Rx Direction
* @data_buff_size: size of the each data buffer allocated in
* DDR
*/
struct IpaHwNtnSetUpCmdData_t {
u32 ring_base_pa;
u32 buff_pool_base_pa;
u16 ntn_ring_size;
u16 num_buffers;
u32 ntn_reg_base_ptr_pa;
u8 ipa_pipe_number;
u8 dir;
u16 data_buff_size;
} __packed;
/**
* union IpaHwNtnCommonChCmdData_t - Structure holding the
* parameters for Ntn Tear down command data params
*
*@ipa_pipe_number: IPA pipe number. This could be Tx or an Rx pipe
*/
union IpaHwNtnCommonChCmdData_t {
struct IpaHwNtnCommonChCmdParams_t {
u32 ipa_pipe_number :8;
u32 reserved :24;
} __packed params;
uint32_t raw32b;
} __packed;
/**
* struct IpaHwNTNErrorEventData_t - Structure holding the
* IPA_HW_2_CPU_EVENT_NTN_ERROR event. The parameters are passed
* as immediate params in the shared memory
*
*@ntn_error_type: type of NTN error (IPA_HW_NTN_ERRORS)
*@ipa_pipe_number: IPA pipe number on which error has happened
* Applicable only if error type indicates channel error
*@ntn_ch_err_type: Information about the channel error (if
* available)
*/
union IpaHwNTNErrorEventData_t {
struct IpaHwNTNErrorEventParams_t {
u32 ntn_error_type :8;
u32 reserved :8;
u32 ipa_pipe_number :8;
u32 ntn_ch_err_type :8;
} __packed params;
uint32_t raw32b;
} __packed;
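The NTN error event parameters arrive as a single 32-bit immediate value in the shared memory; a hedged standalone decode of it (helper names are illustrative, and the field order assumes the same little-endian bitfield allocation as the union above):

```c
#include <assert.h>
#include <stdint.h>

/* Decode helpers mirroring IpaHwNTNErrorEventData_t: byte 0 carries
 * the NTN error type, byte 2 the IPA pipe number, byte 3 the channel
 * error type (byte 1 is reserved).
 */
static uint8_t ntn_evt_error_type(uint32_t raw)
{
	return raw & 0xff;
}

static uint8_t ntn_evt_pipe(uint32_t raw)
{
	return (raw >> 16) & 0xff;
}

static uint8_t ntn_evt_ch_err(uint32_t raw)
{
	return (raw >> 24) & 0xff;
}
```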
/**
* struct NTNRxInfoData_t - NTN Structure holding the
* Rx pipe information
*
*@max_outstanding_pkts: Number of outstanding packets in Rx
* Ring
*@num_pkts_processed: Number of packets processed - cumulative
*@rx_ring_rp_value: Read pointer last advertised to the WLAN FW
*
*@ntn_ch_err_type: Information about the channel error (if
* available)
*@rx_ind_ring_stats:
*@bam_stats:
*@num_bam_int_handled: Number of Bam Interrupts handled by FW
*@num_db: Number of times the doorbell was rung
*@num_unexpected_db: Number of unexpected doorbells
*@num_pkts_in_dis_uninit_state:
*@num_bam_int_handled_while_not_in_bam: Number of Bam
* Interrupts handled by FW
*@num_bam_int_handled_while_in_bam_state: Number of Bam
* Interrupts handled by FW
*/
struct NTNRxInfoData_t {
u32 max_outstanding_pkts;
u32 num_pkts_processed;
u32 rx_ring_rp_value;
struct IpaHwRingStats_t rx_ind_ring_stats;
struct IpaHwBamStats_t bam_stats;
u32 num_bam_int_handled;
u32 num_db;
u32 num_unexpected_db;
u32 num_pkts_in_dis_uninit_state;
u32 num_bam_int_handled_while_not_in_bam;
u32 num_bam_int_handled_while_in_bam_state;
} __packed;
/**
* struct NTNTxInfoData_t - Structure holding the NTN Tx channel stats.
* Ensure that this is always word aligned
*
*@num_pkts_processed: Number of packets processed - cumulative
*@tail_ptr_val: Latest value of doorbell written to copy engine
*@num_db_fired: Number of DB from uC FW to Copy engine
*
*@tx_comp_ring_stats:
*@bam_stats:
*@num_db: Number of times the doorbell was rung
*@num_unexpected_db: Number of unexpected doorbells
*@num_bam_int_handled: Number of Bam Interrupts handled by FW
*@num_bam_int_in_non_running_state: Number of Bam interrupts
* while not in Running state
*@num_qmb_int_handled: Number of QMB interrupts handled
*@num_bam_int_handled_while_wait_for_bam: Number of times the
* Imm Cmd is injected due to fw_desc change
*/
struct NTNTxInfoData_t {
u32 num_pkts_processed;
u32 tail_ptr_val;
u32 num_db_fired;
struct IpaHwRingStats_t tx_comp_ring_stats;
struct IpaHwBamStats_t bam_stats;
u32 num_db;
u32 num_unexpected_db;
u32 num_bam_int_handled;
u32 num_bam_int_in_non_running_state;
u32 num_qmb_int_handled;
u32 num_bam_int_handled_while_wait_for_bam;
u32 num_bam_int_handled_while_not_in_bam;
} __packed;
/**
* struct IpaHwStatsNTNInfoData_t - Structure holding the NTN Rx/Tx
* channel stats. Ensure that this is always word aligned
*
*/
struct IpaHwStatsNTNInfoData_t {
struct NTNRxInfoData_t rx_ch_stats[IPA_UC_MAX_NTN_RX_CHANNELS];
struct NTNTxInfoData_t tx_ch_stats[IPA_UC_MAX_NTN_TX_CHANNELS];
} __packed;
/*
* uC offload related data structures
*/
#define IPA_UC_OFFLOAD_CONNECTED BIT(0)
#define IPA_UC_OFFLOAD_ENABLED BIT(1)
#define IPA_UC_OFFLOAD_RESUMED BIT(2)
/**
* enum ipa_cpu_2_hw_offload_commands - Values that represent
* the offload commands from CPU
* @IPA_CPU_2_HW_CMD_OFFLOAD_CHANNEL_SET_UP : Command to set up
* Offload protocol's Tx/Rx Path
* @IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN : Command to tear down
* Offload protocol's Tx/Rx Path
*/
enum ipa_cpu_2_hw_offload_commands {
IPA_CPU_2_HW_CMD_OFFLOAD_CHANNEL_SET_UP =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 1),
IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN,
};
/**
* enum ipa_hw_offload_channel_states - Values that represent
* offload channel state machine.
* @IPA_HW_OFFLOAD_CHANNEL_STATE_INITED_DISABLED : Channel is initialized
* but disabled
* @IPA_HW_OFFLOAD_CHANNEL_STATE_RUNNING : Channel is running. Entered after
* SET_UP_COMMAND is processed successfully
* @IPA_HW_OFFLOAD_CHANNEL_STATE_ERROR : Channel is in error state
* @IPA_HW_OFFLOAD_CHANNEL_STATE_INVALID : Invalid state. Shall not be in use
* in operational scenario
*
* These states apply to both Tx and Rx paths. These do not
* reflect the sub-state the state machine may be in
*/
enum ipa_hw_offload_channel_states {
IPA_HW_OFFLOAD_CHANNEL_STATE_INITED_DISABLED = 1,
IPA_HW_OFFLOAD_CHANNEL_STATE_RUNNING = 2,
IPA_HW_OFFLOAD_CHANNEL_STATE_ERROR = 3,
IPA_HW_OFFLOAD_CHANNEL_STATE_INVALID = 0xFF
};
/**
* enum ipa_hw_2_cpu_offload_cmd_resp_status - Values that represent
* offload related command response status to be sent to CPU.
*/
enum ipa_hw_2_cpu_offload_cmd_resp_status {
IPA_HW_2_CPU_OFFLOAD_CMD_STATUS_SUCCESS =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 0),
IPA_HW_2_CPU_OFFLOAD_MAX_TX_CHANNELS =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 1),
IPA_HW_2_CPU_OFFLOAD_TX_RING_OVERRUN_POSSIBILITY =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 2),
IPA_HW_2_CPU_OFFLOAD_TX_RING_SET_UP_FAILURE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 3),
IPA_HW_2_CPU_OFFLOAD_TX_RING_PARAMS_UNALIGNED =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 4),
IPA_HW_2_CPU_OFFLOAD_UNKNOWN_TX_CHANNEL =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 5),
IPA_HW_2_CPU_OFFLOAD_TX_INVALID_FSM_TRANSITION =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 6),
IPA_HW_2_CPU_OFFLOAD_TX_FSM_TRANSITION_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 7),
IPA_HW_2_CPU_OFFLOAD_MAX_RX_CHANNELS =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 8),
IPA_HW_2_CPU_OFFLOAD_RX_RING_PARAMS_UNALIGNED =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 9),
IPA_HW_2_CPU_OFFLOAD_RX_RING_SET_UP_FAILURE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 10),
IPA_HW_2_CPU_OFFLOAD_UNKNOWN_RX_CHANNEL =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 11),
IPA_HW_2_CPU_OFFLOAD_RX_INVALID_FSM_TRANSITION =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 12),
IPA_HW_2_CPU_OFFLOAD_RX_FSM_TRANSITION_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 13),
IPA_HW_2_CPU_OFFLOAD_RX_RING_OVERRUN_POSSIBILITY =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 14),
};
/**
* union IpaHwSetUpCmd - Channel set up command parameters, one
* member per supported offload protocol
*/
union IpaHwSetUpCmd {
struct IpaHwNtnSetUpCmdData_t NtnSetupCh_params;
} __packed;
/**
* struct IpaHwOffloadSetUpCmdData_t - protocol ID plus the
* per-protocol channel set up parameters
*/
struct IpaHwOffloadSetUpCmdData_t {
u8 protocol;
union IpaHwSetUpCmd SetupCh_params;
} __packed;
/**
* union IpaHwCommonChCmd - Structure holding the parameters
* for IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN
*
*
*/
union IpaHwCommonChCmd {
union IpaHwNtnCommonChCmdData_t NtnCommonCh_params;
} __packed;
struct IpaHwOffloadCommonChCmdData_t {
u8 protocol;
union IpaHwCommonChCmd CommonCh_params;
} __packed;
#endif /* _IPA_UC_OFFLOAD_I_H_ */

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,391 @@
/* Copyright (c) 2013-2015, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/rmnet_ipa_fd_ioctl.h>
#include "ipa_qmi_service.h"
#define DRIVER_NAME "wwan_ioctl"
#ifdef CONFIG_COMPAT
#define WAN_IOC_ADD_FLT_RULE32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_ADD_FLT_RULE, \
compat_uptr_t)
#define WAN_IOC_ADD_FLT_RULE_INDEX32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_ADD_FLT_INDEX, \
compat_uptr_t)
#define WAN_IOC_POLL_TETHERING_STATS32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_POLL_TETHERING_STATS, \
compat_uptr_t)
#define WAN_IOC_SET_DATA_QUOTA32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_SET_DATA_QUOTA, \
compat_uptr_t)
#define WAN_IOC_SET_TETHER_CLIENT_PIPE32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_SET_TETHER_CLIENT_PIPE, \
compat_uptr_t)
#define WAN_IOC_QUERY_TETHER_STATS32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_QUERY_TETHER_STATS, \
compat_uptr_t)
#define WAN_IOC_RESET_TETHER_STATS32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_RESET_TETHER_STATS, \
compat_uptr_t)
#define WAN_IOC_QUERY_DL_FILTER_STATS32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_QUERY_DL_FILTER_STATS, \
compat_uptr_t)
#endif
static unsigned int dev_num = 1;
static struct cdev wan_ioctl_cdev;
static unsigned int process_ioctl = 1;
static struct class *class;
static dev_t device;
static long wan_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int retval = 0;
u32 pyld_sz;
u8 *param = NULL;
IPAWANDBG("device %s got ioctl events :>>>\n",
DRIVER_NAME);
if (!process_ioctl) {
IPAWANDBG("modem is in SSR, ignoring ioctl\n");
return -EAGAIN;
}
switch (cmd) {
case WAN_IOC_ADD_FLT_RULE:
IPAWANDBG("device %s got WAN_IOC_ADD_FLT_RULE :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct ipa_install_fltr_rule_req_msg_v01);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (qmi_filter_request_send(
(struct ipa_install_fltr_rule_req_msg_v01 *)param)) {
IPAWANDBG("IPACM->Q6 add filter rule failed\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_ADD_FLT_RULE_INDEX:
IPAWANDBG("device %s got WAN_IOC_ADD_FLT_RULE_INDEX :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct ipa_fltr_installed_notif_req_msg_v01);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (qmi_filter_notify_send(
(struct ipa_fltr_installed_notif_req_msg_v01 *)param)) {
IPAWANDBG("IPACM->Q6 rule index fail\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_VOTE_FOR_BW_MBPS:
IPAWANDBG("device %s got WAN_IOC_VOTE_FOR_BW_MBPS :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(uint32_t);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (vote_for_bus_bw((uint32_t *)param)) {
IPAWANERR("Failed to vote for bus BW\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_POLL_TETHERING_STATS:
IPAWANDBG("device %s got WAN_IOCTL_POLL_TETHERING_STATS :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct wan_ioctl_poll_tethering_stats);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa_poll_tethering_stats(
(struct wan_ioctl_poll_tethering_stats *)param)) {
IPAWANERR("WAN_IOCTL_POLL_TETHERING_STATS failed\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_SET_DATA_QUOTA:
IPAWANDBG("device %s got WAN_IOCTL_SET_DATA_QUOTA :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct wan_ioctl_set_data_quota);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa_set_data_quota(
(struct wan_ioctl_set_data_quota *)param)) {
IPAWANERR("WAN_IOC_SET_DATA_QUOTA failed\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_SET_TETHER_CLIENT_PIPE:
IPAWANDBG("device %s got WAN_IOC_SET_TETHER_CLIENT_PIPE :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct wan_ioctl_set_tether_client_pipe);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa_set_tether_client_pipe(
(struct wan_ioctl_set_tether_client_pipe *)param)) {
IPAWANERR("WAN_IOC_SET_TETHER_CLIENT_PIPE failed\n");
retval = -EFAULT;
break;
}
break;
case WAN_IOC_QUERY_TETHER_STATS:
IPAWANDBG("device %s got WAN_IOC_QUERY_TETHER_STATS :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct wan_ioctl_query_tether_stats);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa_query_tethering_stats(
(struct wan_ioctl_query_tether_stats *)param, false)) {
IPAWANERR("WAN_IOC_QUERY_TETHER_STATS failed\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_RESET_TETHER_STATS:
IPAWANDBG("device %s got WAN_IOC_RESET_TETHER_STATS :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct wan_ioctl_reset_tether_stats);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa_query_tethering_stats(NULL, true)) {
IPAWANERR("WAN_IOC_RESET_TETHER_STATS failed\n");
retval = -EFAULT;
break;
}
break;
default:
retval = -ENOTTY;
}
kfree(param);
return retval;
}
#ifdef CONFIG_COMPAT
long compat_wan_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
case WAN_IOC_ADD_FLT_RULE32:
cmd = WAN_IOC_ADD_FLT_RULE;
break;
case WAN_IOC_ADD_FLT_RULE_INDEX32:
cmd = WAN_IOC_ADD_FLT_RULE_INDEX;
break;
case WAN_IOC_POLL_TETHERING_STATS32:
cmd = WAN_IOC_POLL_TETHERING_STATS;
break;
case WAN_IOC_SET_DATA_QUOTA32:
cmd = WAN_IOC_SET_DATA_QUOTA;
break;
case WAN_IOC_SET_TETHER_CLIENT_PIPE32:
cmd = WAN_IOC_SET_TETHER_CLIENT_PIPE;
break;
case WAN_IOC_QUERY_TETHER_STATS32:
cmd = WAN_IOC_QUERY_TETHER_STATS;
break;
case WAN_IOC_RESET_TETHER_STATS32:
cmd = WAN_IOC_RESET_TETHER_STATS;
break;
case WAN_IOC_QUERY_DL_FILTER_STATS32:
cmd = WAN_IOC_QUERY_DL_FILTER_STATS;
break;
default:
return -ENOIOCTLCMD;
}
return wan_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
}
#endif
static int wan_ioctl_open(struct inode *inode, struct file *filp)
{
IPAWANDBG("IPA A7 wan_ioctl open OK :>>>>\n");
return 0;
}
const struct file_operations fops = {
.owner = THIS_MODULE,
.open = wan_ioctl_open,
.read = NULL,
.unlocked_ioctl = wan_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = compat_wan_ioctl,
#endif
};
int wan_ioctl_init(void)
{
unsigned int wan_ioctl_major = 0;
int ret;
struct device *dev;
device = MKDEV(wan_ioctl_major, 0);
ret = alloc_chrdev_region(&device, 0, dev_num, DRIVER_NAME);
if (ret) {
IPAWANERR(":device_alloc err.\n");
goto dev_alloc_err;
}
wan_ioctl_major = MAJOR(device);
class = class_create(THIS_MODULE, DRIVER_NAME);
if (IS_ERR(class)) {
IPAWANERR(":class_create err.\n");
goto class_err;
}
dev = device_create(class, NULL, device,
NULL, DRIVER_NAME);
if (IS_ERR(dev)) {
IPAWANERR(":device_create err.\n");
goto device_err;
}
cdev_init(&wan_ioctl_cdev, &fops);
ret = cdev_add(&wan_ioctl_cdev, device, dev_num);
if (ret) {
IPAWANERR(":cdev_add err.\n");
goto cdev_add_err;
}
process_ioctl = 1;
IPAWANDBG("IPA %s major(%d) init ok :>>>>\n",
DRIVER_NAME, wan_ioctl_major);
return 0;
cdev_add_err:
device_destroy(class, device);
device_err:
class_destroy(class);
class_err:
unregister_chrdev_region(device, dev_num);
dev_alloc_err:
return -ENODEV;
}
void wan_ioctl_stop_qmi_messages(void)
{
process_ioctl = 0;
}
void wan_ioctl_enable_qmi_messages(void)
{
process_ioctl = 1;
}
void wan_ioctl_deinit(void)
{
cdev_del(&wan_ioctl_cdev);
device_destroy(class, device);
class_destroy(class);
unregister_chrdev_region(device, dev_num);
}


@@ -0,0 +1,240 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/completion.h>
#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/fs.h>
#include <linux/if_ether.h>
#include <linux/ioctl.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/msm_ipa.h>
#include <linux/mutex.h>
#include <linux/skbuff.h>
#include <linux/types.h>
#include <linux/ipa.h>
#include <linux/netdevice.h>
#include "ipa_i.h"
#define TETH_BRIDGE_DRV_NAME "ipa_tethering_bridge"
#define TETH_DBG(fmt, args...) \
pr_debug(TETH_BRIDGE_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args)
#define TETH_DBG_FUNC_ENTRY() \
pr_debug(TETH_BRIDGE_DRV_NAME " %s:%d ENTRY\n", __func__, __LINE__)
#define TETH_DBG_FUNC_EXIT() \
pr_debug(TETH_BRIDGE_DRV_NAME " %s:%d EXIT\n", __func__, __LINE__)
#define TETH_ERR(fmt, args...) \
pr_err(TETH_BRIDGE_DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
/**
* struct teth_bridge_ctx - Tethering bridge driver context information
* @class: kernel class pointer
* @dev_num: kernel device number
* @dev: kernel device struct pointer
* @cdev: kernel character device struct
*/
struct teth_bridge_ctx {
struct class *class;
dev_t dev_num;
struct device *dev;
struct cdev cdev;
};
static struct teth_bridge_ctx *teth_ctx;
/**
* teth_bridge_ipa_cb() - Callback to handle IPA data path events
* @priv - private data
* @evt - event type
* @data - event specific data (usually skb)
*
* This callback is called by IPA driver for exception packets from USB.
* All exception packets are handled by Q6 and should not reach this function.
* Packets will arrive to AP exception pipe only in case where packets are
* sent from USB before Q6 has setup the call.
*/
static void teth_bridge_ipa_cb(void *priv, enum ipa_dp_evt_type evt,
unsigned long data)
{
struct sk_buff *skb = (struct sk_buff *)data;
TETH_DBG_FUNC_ENTRY();
if (evt != IPA_RECEIVE) {
TETH_ERR("unexpected event %d\n", evt);
WARN_ON(1);
return;
}
TETH_ERR("Unexpected exception packet from USB, dropping packet\n");
dev_kfree_skb_any(skb);
TETH_DBG_FUNC_EXIT();
}
/**
* ipa2_teth_bridge_init() - Initialize the Tethering bridge driver
* @params - in/out params for USB initialization API (please look at struct
* definition for more info)
*
* USB driver gets a pointer to a callback function (usb_notify_cb) and an
* associated data. USB driver installs this callback function in the call to
* ipa_connect().
*
* Builds IPA resource manager dependency graph.
*
* Return codes: 0: success,
* -EINVAL - Bad parameter
* Other negative value - Failure
*/
int ipa2_teth_bridge_init(struct teth_bridge_init_params *params)
{
int res = 0;
TETH_DBG_FUNC_ENTRY();
if (!params) {
TETH_ERR("Bad parameter\n");
TETH_DBG_FUNC_EXIT();
return -EINVAL;
}
params->usb_notify_cb = teth_bridge_ipa_cb;
params->private_data = NULL;
params->skip_ep_cfg = true;
/* Build dependency graph */
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_USB_PROD,
IPA_RM_RESOURCE_Q6_CONS);
if (res < 0 && res != -EINPROGRESS) {
TETH_ERR("ipa_rm_add_dependency() failed.\n");
goto bail;
}
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_Q6_PROD,
IPA_RM_RESOURCE_USB_CONS);
if (res < 0 && res != -EINPROGRESS) {
ipa_rm_delete_dependency(IPA_RM_RESOURCE_USB_PROD,
IPA_RM_RESOURCE_Q6_CONS);
TETH_ERR("ipa_rm_add_dependency() failed.\n");
goto bail;
}
res = 0;
goto bail;
bail:
TETH_DBG_FUNC_EXIT();
return res;
}
/**
* ipa2_teth_bridge_disconnect() - Disconnect tethering bridge module
*/
int ipa2_teth_bridge_disconnect(enum ipa_client_type client)
{
TETH_DBG_FUNC_ENTRY();
ipa_rm_delete_dependency(IPA_RM_RESOURCE_USB_PROD,
IPA_RM_RESOURCE_Q6_CONS);
ipa_rm_delete_dependency(IPA_RM_RESOURCE_Q6_PROD,
IPA_RM_RESOURCE_USB_CONS);
TETH_DBG_FUNC_EXIT();
return 0;
}
/**
* ipa2_teth_bridge_connect() - Connect bridge for a tethered Rmnet / MBIM call
* @connect_params: Connection info
*
* Return codes: 0: success
* -EINVAL: invalid parameters
* -EPERM: Operation not permitted as the bridge is already
* connected
*/
int ipa2_teth_bridge_connect(struct teth_bridge_connect_params *connect_params)
{
return 0;
}
static long teth_bridge_ioctl(struct file *filp,
unsigned int cmd,
unsigned long arg)
{
IPAERR("No ioctls are supported!\n");
return -ENOIOCTLCMD;
}
static const struct file_operations teth_bridge_drv_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = teth_bridge_ioctl,
};
/**
* teth_bridge_driver_init() - Initialize tethering bridge driver
*
*/
int teth_bridge_driver_init(void)
{
int res;
TETH_DBG("Tethering bridge driver init\n");
teth_ctx = kzalloc(sizeof(*teth_ctx), GFP_KERNEL);
if (!teth_ctx) {
TETH_ERR("kzalloc err.\n");
return -ENOMEM;
}
teth_ctx->class = class_create(THIS_MODULE, TETH_BRIDGE_DRV_NAME);
if (IS_ERR(teth_ctx->class)) {
	TETH_ERR("class_create err.\n");
	kfree(teth_ctx);
	teth_ctx = NULL;
	return -ENODEV;
}
res = alloc_chrdev_region(&teth_ctx->dev_num, 0, 1,
TETH_BRIDGE_DRV_NAME);
if (res) {
TETH_ERR("alloc_chrdev_region err.\n");
res = -ENODEV;
goto fail_alloc_chrdev_region;
}
teth_ctx->dev = device_create(teth_ctx->class, NULL, teth_ctx->dev_num,
teth_ctx, TETH_BRIDGE_DRV_NAME);
if (IS_ERR(teth_ctx->dev)) {
TETH_ERR(":device_create err.\n");
res = -ENODEV;
goto fail_device_create;
}
cdev_init(&teth_ctx->cdev, &teth_bridge_drv_fops);
teth_ctx->cdev.owner = THIS_MODULE;
teth_ctx->cdev.ops = &teth_bridge_drv_fops;
res = cdev_add(&teth_ctx->cdev, teth_ctx->dev_num, 1);
if (res) {
TETH_ERR(":cdev_add err=%d\n", -res);
res = -ENODEV;
goto fail_cdev_add;
}
TETH_DBG("Tethering bridge driver init OK\n");
return 0;
fail_cdev_add:
device_destroy(teth_ctx->class, teth_ctx->dev_num);
fail_device_create:
unregister_chrdev_region(teth_ctx->dev_num, 1);
fail_alloc_chrdev_region:
kfree(teth_ctx);
teth_ctx = NULL;
return res;
}
EXPORT_SYMBOL(teth_bridge_driver_init);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Tethering bridge driver");


@@ -0,0 +1,8 @@
obj-$(CONFIG_IPA3) += ipahal/
obj-$(CONFIG_IPA3) += ipat.o
ipat-y := ipa.o ipa_debugfs.o ipa_hdr.o ipa_flt.o ipa_rt.o ipa_dp.o ipa_client.o \
ipa_utils.o ipa_nat.o ipa_intf.o teth_bridge.o ipa_interrupts.o \
ipa_uc.o ipa_uc_wdi.o ipa_dma.o ipa_uc_mhi.o ipa_mhi.o ipa_uc_ntn.o
obj-$(CONFIG_RMNET_IPA3) += rmnet_ipa.o ipa_qmi_service_v01.o ipa_qmi_service.o rmnet_ipa_fd_ioctl.o

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -0,0 +1,990 @@
/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/msm_ipa.h>
#include <linux/mutex.h>
#include <linux/ipa.h>
#include "linux/msm_gsi.h"
#include "ipa_i.h"
#define IPA_DMA_POLLING_MIN_SLEEP_RX 1010
#define IPA_DMA_POLLING_MAX_SLEEP_RX 1050
#define IPA_DMA_SYS_DESC_MAX_FIFO_SZ 0x7FF8
#define IPA_DMA_MAX_PKT_SZ 0xFFFF
#define IPA_DMA_MAX_PENDING_SYNC (IPA_SYS_DESC_FIFO_SZ / \
sizeof(struct sps_iovec) - 1)
#define IPA_DMA_MAX_PENDING_ASYNC (IPA_DMA_SYS_DESC_MAX_FIFO_SZ / \
sizeof(struct sps_iovec) - 1)
#define IPADMA_DRV_NAME "ipa_dma"
#define IPADMA_DBG(fmt, args...) \
do { \
pr_debug(IPADMA_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPADMA_DBG_LOW(fmt, args...) \
do { \
pr_debug(IPADMA_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPADMA_ERR(fmt, args...) \
do { \
pr_err(IPADMA_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPADMA_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPADMA_FUNC_ENTRY() \
IPADMA_DBG_LOW("ENTRY\n")
#define IPADMA_FUNC_EXIT() \
IPADMA_DBG_LOW("EXIT\n")
#ifdef CONFIG_DEBUG_FS
#define IPADMA_MAX_MSG_LEN 1024
static char dbg_buff[IPADMA_MAX_MSG_LEN];
static void ipa3_dma_debugfs_init(void);
static void ipa3_dma_debugfs_destroy(void);
#else
static void ipa3_dma_debugfs_init(void) {}
static void ipa3_dma_debugfs_destroy(void) {}
#endif
/**
 * struct ipa3_dma_ctx - IPADMA driver context information
 * @is_enabled: is ipa_dma enabled?
 * @destroy_pending: destroy ipa_dma after handling all pending memcpy
 * @ipa_dma_xfer_wrapper_cache: cache of ipa3_dma_xfer_wrapper structs
 * @sync_lock: lock for synchronisation in sync_memcpy
 * @async_lock: lock for synchronisation in async_memcpy
 * @enable_lock: lock for is_enabled
 * @pending_lock: lock to synchronize is_enabled and the pending counters
 * @done: no pending work - ipadma can be destroyed
 * @ipa_dma_sync_prod_hdl: handle of sync memcpy producer
 * @ipa_dma_async_prod_hdl: handle of async memcpy producer
 * @ipa_dma_sync_cons_hdl: handle of sync memcpy consumer
 * @ipa_dma_async_cons_hdl: handle of async memcpy consumer
 * @sync_memcpy_pending_cnt: number of pending sync memcpy operations
 * @async_memcpy_pending_cnt: number of pending async memcpy operations
 * @uc_memcpy_pending_cnt: number of pending uc memcpy operations
 * @total_sync_memcpy: total number of sync memcpy (statistics)
 * @total_async_memcpy: total number of async memcpy (statistics)
 * @total_uc_memcpy: total number of uc memcpy (statistics)
*/
struct ipa3_dma_ctx {
bool is_enabled;
bool destroy_pending;
struct kmem_cache *ipa_dma_xfer_wrapper_cache;
struct mutex sync_lock;
spinlock_t async_lock;
struct mutex enable_lock;
spinlock_t pending_lock;
struct completion done;
u32 ipa_dma_sync_prod_hdl;
u32 ipa_dma_async_prod_hdl;
u32 ipa_dma_sync_cons_hdl;
u32 ipa_dma_async_cons_hdl;
atomic_t sync_memcpy_pending_cnt;
atomic_t async_memcpy_pending_cnt;
atomic_t uc_memcpy_pending_cnt;
atomic_t total_sync_memcpy;
atomic_t total_async_memcpy;
atomic_t total_uc_memcpy;
};
static struct ipa3_dma_ctx *ipa3_dma_ctx;
/**
* ipa3_dma_init() -Initialize IPADMA.
*
 * This function initializes all IPADMA internal data and connects the pipes:
 * MEMCPY_DMA_SYNC_PROD -> MEMCPY_DMA_SYNC_CONS
 * MEMCPY_DMA_ASYNC_PROD -> MEMCPY_DMA_ASYNC_CONS
 *
 * Return codes: 0: success
 * -EFAULT: IPADMA is already initialized
 * -EINVAL: IPA driver is not initialized
 * -ENOMEM: memory allocation error
* -EPERM: pipe connection failed
*/
int ipa3_dma_init(void)
{
struct ipa3_dma_ctx *ipa_dma_ctx_t;
struct ipa_sys_connect_params sys_in;
int res = 0;
IPADMA_FUNC_ENTRY();
if (ipa3_dma_ctx) {
IPADMA_ERR("Already initialized.\n");
return -EFAULT;
}
if (!ipa3_is_ready()) {
IPADMA_ERR("IPA is not ready yet\n");
return -EINVAL;
}
ipa_dma_ctx_t = kzalloc(sizeof(*(ipa3_dma_ctx)), GFP_KERNEL);
if (!ipa_dma_ctx_t) {
IPADMA_ERR("kzalloc error.\n");
return -ENOMEM;
}
ipa_dma_ctx_t->ipa_dma_xfer_wrapper_cache =
kmem_cache_create("IPA DMA XFER WRAPPER",
sizeof(struct ipa3_dma_xfer_wrapper), 0, 0, NULL);
if (!ipa_dma_ctx_t->ipa_dma_xfer_wrapper_cache) {
IPAERR(":failed to create ipa dma xfer wrapper cache.\n");
res = -ENOMEM;
goto fail_mem_ctrl;
}
mutex_init(&ipa_dma_ctx_t->enable_lock);
spin_lock_init(&ipa_dma_ctx_t->async_lock);
mutex_init(&ipa_dma_ctx_t->sync_lock);
spin_lock_init(&ipa_dma_ctx_t->pending_lock);
init_completion(&ipa_dma_ctx_t->done);
ipa_dma_ctx_t->is_enabled = false;
ipa_dma_ctx_t->destroy_pending = false;
atomic_set(&ipa_dma_ctx_t->async_memcpy_pending_cnt, 0);
atomic_set(&ipa_dma_ctx_t->sync_memcpy_pending_cnt, 0);
atomic_set(&ipa_dma_ctx_t->uc_memcpy_pending_cnt, 0);
atomic_set(&ipa_dma_ctx_t->total_async_memcpy, 0);
atomic_set(&ipa_dma_ctx_t->total_sync_memcpy, 0);
atomic_set(&ipa_dma_ctx_t->total_uc_memcpy, 0);
/* IPADMA SYNC PROD-source for sync memcpy */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_MEMCPY_DMA_SYNC_PROD;
sys_in.desc_fifo_sz = IPA_SYS_DESC_FIFO_SZ;
sys_in.ipa_ep_cfg.mode.mode = IPA_DMA;
sys_in.ipa_ep_cfg.mode.dst = IPA_CLIENT_MEMCPY_DMA_SYNC_CONS;
sys_in.skip_ep_cfg = false;
if (ipa3_setup_sys_pipe(&sys_in,
&ipa_dma_ctx_t->ipa_dma_sync_prod_hdl)) {
IPADMA_ERR(":setup sync prod pipe failed\n");
res = -EPERM;
goto fail_sync_prod;
}
/* IPADMA SYNC CONS-destination for sync memcpy */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_MEMCPY_DMA_SYNC_CONS;
sys_in.desc_fifo_sz = IPA_SYS_DESC_FIFO_SZ;
sys_in.skip_ep_cfg = false;
sys_in.ipa_ep_cfg.mode.mode = IPA_BASIC;
sys_in.notify = NULL;
sys_in.priv = NULL;
if (ipa3_setup_sys_pipe(&sys_in,
&ipa_dma_ctx_t->ipa_dma_sync_cons_hdl)) {
IPADMA_ERR(":setup sync cons pipe failed.\n");
res = -EPERM;
goto fail_sync_cons;
}
IPADMA_DBG("SYNC MEMCPY pipes are connected\n");
/* IPADMA ASYNC PROD-source for async memcpy */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_MEMCPY_DMA_ASYNC_PROD;
sys_in.desc_fifo_sz = IPA_DMA_SYS_DESC_MAX_FIFO_SZ;
sys_in.ipa_ep_cfg.mode.mode = IPA_DMA;
sys_in.ipa_ep_cfg.mode.dst = IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS;
sys_in.skip_ep_cfg = false;
sys_in.notify = NULL;
if (ipa3_setup_sys_pipe(&sys_in,
&ipa_dma_ctx_t->ipa_dma_async_prod_hdl)) {
IPADMA_ERR(":setup async prod pipe failed.\n");
res = -EPERM;
goto fail_async_prod;
}
/* IPADMA ASYNC CONS-destination for async memcpy */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS;
sys_in.desc_fifo_sz = IPA_DMA_SYS_DESC_MAX_FIFO_SZ;
sys_in.skip_ep_cfg = false;
sys_in.ipa_ep_cfg.mode.mode = IPA_BASIC;
sys_in.notify = ipa3_dma_async_memcpy_notify_cb;
sys_in.priv = NULL;
if (ipa3_setup_sys_pipe(&sys_in,
&ipa_dma_ctx_t->ipa_dma_async_cons_hdl)) {
IPADMA_ERR(":setup async cons pipe failed.\n");
res = -EPERM;
goto fail_async_cons;
}
ipa3_dma_debugfs_init();
ipa3_dma_ctx = ipa_dma_ctx_t;
IPADMA_DBG("ASYNC MEMCPY pipes are connected\n");
IPADMA_FUNC_EXIT();
return res;
fail_async_cons:
ipa3_teardown_sys_pipe(ipa_dma_ctx_t->ipa_dma_async_prod_hdl);
fail_async_prod:
ipa3_teardown_sys_pipe(ipa_dma_ctx_t->ipa_dma_sync_cons_hdl);
fail_sync_cons:
ipa3_teardown_sys_pipe(ipa_dma_ctx_t->ipa_dma_sync_prod_hdl);
fail_sync_prod:
kmem_cache_destroy(ipa_dma_ctx_t->ipa_dma_xfer_wrapper_cache);
fail_mem_ctrl:
kfree(ipa_dma_ctx_t);
ipa3_dma_ctx = NULL;
return res;
}
/**
 * ipa3_dma_enable() - Vote for IPA clocks.
 *
 * Return codes: 0: success
* -EINVAL: IPADMA is not initialized
* -EPERM: Operation not permitted as ipa_dma is already
* enabled
*/
int ipa3_dma_enable(void)
{
IPADMA_FUNC_ENTRY();
if (ipa3_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't enable\n");
return -EPERM;
}
mutex_lock(&ipa3_dma_ctx->enable_lock);
if (ipa3_dma_ctx->is_enabled) {
IPADMA_ERR("Already enabled.\n");
mutex_unlock(&ipa3_dma_ctx->enable_lock);
return -EPERM;
}
IPA_ACTIVE_CLIENTS_INC_SPECIAL("DMA");
ipa3_dma_ctx->is_enabled = true;
mutex_unlock(&ipa3_dma_ctx->enable_lock);
IPADMA_FUNC_EXIT();
return 0;
}
static bool ipa3_dma_work_pending(void)
{
if (atomic_read(&ipa3_dma_ctx->sync_memcpy_pending_cnt)) {
IPADMA_DBG("pending sync\n");
return true;
}
if (atomic_read(&ipa3_dma_ctx->async_memcpy_pending_cnt)) {
IPADMA_DBG("pending async\n");
return true;
}
if (atomic_read(&ipa3_dma_ctx->uc_memcpy_pending_cnt)) {
IPADMA_DBG("pending uc\n");
return true;
}
IPADMA_DBG_LOW("no pending work\n");
return false;
}
/**
 * ipa3_dma_disable() - Unvote for IPA clocks.
 *
 * Enter power save mode.
 *
 * Return codes: 0: success
 * -EINVAL: IPADMA is not initialized
 * -EPERM: Operation not permitted as ipa_dma is already
 * disabled
 * -EFAULT: cannot disable ipa_dma while there are pending
 * memcpy operations
*/
int ipa3_dma_disable(void)
{
unsigned long flags;
IPADMA_FUNC_ENTRY();
if (ipa3_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't disable\n");
return -EPERM;
}
mutex_lock(&ipa3_dma_ctx->enable_lock);
spin_lock_irqsave(&ipa3_dma_ctx->pending_lock, flags);
if (!ipa3_dma_ctx->is_enabled) {
IPADMA_ERR("Already disabled.\n");
spin_unlock_irqrestore(&ipa3_dma_ctx->pending_lock, flags);
mutex_unlock(&ipa3_dma_ctx->enable_lock);
return -EPERM;
}
if (ipa3_dma_work_pending()) {
IPADMA_ERR("There is pending work, can't disable.\n");
spin_unlock_irqrestore(&ipa3_dma_ctx->pending_lock, flags);
mutex_unlock(&ipa3_dma_ctx->enable_lock);
return -EFAULT;
}
ipa3_dma_ctx->is_enabled = false;
spin_unlock_irqrestore(&ipa3_dma_ctx->pending_lock, flags);
IPA_ACTIVE_CLIENTS_DEC_SPECIAL("DMA");
mutex_unlock(&ipa3_dma_ctx->enable_lock);
IPADMA_FUNC_EXIT();
return 0;
}
/**
* ipa3_dma_sync_memcpy()- Perform synchronous memcpy using IPA.
*
* @dest: physical address to store the copied data.
* @src: physical address of the source data to copy.
* @len: number of bytes to copy.
*
* Return codes: 0: success
* -EINVAL: invalid params
 * -EPERM: operation not permitted as ipa_dma isn't enabled or
 * initialized
 * -SPS_ERROR: on sps failures
* -EFAULT: other
*/
int ipa3_dma_sync_memcpy(u64 dest, u64 src, int len)
{
int ep_idx;
int res;
int i = 0;
struct ipa3_sys_context *cons_sys;
struct ipa3_sys_context *prod_sys;
struct sps_iovec iov;
struct ipa3_dma_xfer_wrapper *xfer_descr = NULL;
struct ipa3_dma_xfer_wrapper *head_descr = NULL;
struct gsi_xfer_elem xfer_elem;
struct gsi_chan_xfer_notify gsi_notify;
unsigned long flags;
bool stop_polling = false;
IPADMA_FUNC_ENTRY();
IPADMA_DBG_LOW("dest = 0x%llx, src = 0x%llx, len = %d\n",
dest, src, len);
if (ipa3_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't memcpy\n");
return -EPERM;
}
if ((max(src, dest) - min(src, dest)) < len) {
IPADMA_ERR("invalid addresses - overlapping buffers\n");
return -EINVAL;
}
if (len > IPA_DMA_MAX_PKT_SZ || len <= 0) {
IPADMA_ERR("invalid len, %d\n", len);
return -EINVAL;
}
if (ipa3_ctx->transport_prototype != IPA_TRANSPORT_TYPE_GSI) {
if (((u32)src != src) || ((u32)dest != dest)) {
IPADMA_ERR("Bad addr, only 32b addr supported for BAM\n");
return -EINVAL;
}
}
spin_lock_irqsave(&ipa3_dma_ctx->pending_lock, flags);
if (!ipa3_dma_ctx->is_enabled) {
IPADMA_ERR("can't memcpy, IPADMA isn't enabled\n");
spin_unlock_irqrestore(&ipa3_dma_ctx->pending_lock, flags);
return -EPERM;
}
atomic_inc(&ipa3_dma_ctx->sync_memcpy_pending_cnt);
spin_unlock_irqrestore(&ipa3_dma_ctx->pending_lock, flags);
if (ipa3_ctx->transport_prototype == IPA_TRANSPORT_TYPE_SPS) {
if (atomic_read(&ipa3_dma_ctx->sync_memcpy_pending_cnt) >=
IPA_DMA_MAX_PENDING_SYNC) {
atomic_dec(&ipa3_dma_ctx->sync_memcpy_pending_cnt);
IPADMA_ERR("Reached pending requests limit\n");
return -EFAULT;
}
}
ep_idx = ipa3_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_SYNC_CONS);
if (-1 == ep_idx) {
	IPADMA_ERR("Client %u is not mapped\n",
		IPA_CLIENT_MEMCPY_DMA_SYNC_CONS);
	res = -EFAULT;
	goto fail_mem_alloc;
}
cons_sys = ipa3_ctx->ep[ep_idx].sys;
ep_idx = ipa3_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_SYNC_PROD);
if (-1 == ep_idx) {
	IPADMA_ERR("Client %u is not mapped\n",
		IPA_CLIENT_MEMCPY_DMA_SYNC_PROD);
	res = -EFAULT;
	goto fail_mem_alloc;
}
prod_sys = ipa3_ctx->ep[ep_idx].sys;
xfer_descr = kmem_cache_zalloc(ipa3_dma_ctx->ipa_dma_xfer_wrapper_cache,
GFP_KERNEL);
if (!xfer_descr) {
IPADMA_ERR("failed to alloc xfer descr wrapper\n");
res = -ENOMEM;
goto fail_mem_alloc;
}
xfer_descr->phys_addr_dest = dest;
xfer_descr->phys_addr_src = src;
xfer_descr->len = len;
init_completion(&xfer_descr->xfer_done);
mutex_lock(&ipa3_dma_ctx->sync_lock);
list_add_tail(&xfer_descr->link, &cons_sys->head_desc_list);
cons_sys->len++;
if (ipa3_ctx->transport_prototype == IPA_TRANSPORT_TYPE_GSI) {
xfer_elem.addr = dest;
xfer_elem.len = len;
xfer_elem.type = GSI_XFER_ELEM_DATA;
xfer_elem.flags = GSI_XFER_FLAG_EOT;
xfer_elem.xfer_user_data = xfer_descr;
res = gsi_queue_xfer(cons_sys->ep->gsi_chan_hdl, 1,
&xfer_elem, true);
if (res) {
IPADMA_ERR(
"Failed: gsi_queue_xfer dest descr res:%d\n",
res);
goto fail_send;
}
xfer_elem.addr = src;
xfer_elem.len = len;
xfer_elem.type = GSI_XFER_ELEM_DATA;
xfer_elem.flags = GSI_XFER_FLAG_EOT;
xfer_elem.xfer_user_data = NULL;
res = gsi_queue_xfer(prod_sys->ep->gsi_chan_hdl, 1,
&xfer_elem, true);
if (res) {
IPADMA_ERR(
"Failed: gsi_queue_xfer src descr res:%d\n",
res);
BUG();
}
} else {
res = sps_transfer_one(cons_sys->ep->ep_hdl, dest, len,
NULL, 0);
if (res) {
IPADMA_ERR("Failed: sps_transfer_one on dest descr\n");
goto fail_send;
}
res = sps_transfer_one(prod_sys->ep->ep_hdl, src, len,
NULL, SPS_IOVEC_FLAG_EOT);
if (res) {
IPADMA_ERR("Failed: sps_transfer_one on src descr\n");
BUG();
}
}
head_descr = list_first_entry(&cons_sys->head_desc_list,
struct ipa3_dma_xfer_wrapper, link);
/* in case we are not the head of the list, wait for head to wake us */
if (xfer_descr != head_descr) {
mutex_unlock(&ipa3_dma_ctx->sync_lock);
wait_for_completion(&xfer_descr->xfer_done);
mutex_lock(&ipa3_dma_ctx->sync_lock);
head_descr = list_first_entry(&cons_sys->head_desc_list,
struct ipa3_dma_xfer_wrapper, link);
BUG_ON(xfer_descr != head_descr);
}
mutex_unlock(&ipa3_dma_ctx->sync_lock);
do {
/* wait for transfer to complete */
if (ipa3_ctx->transport_prototype == IPA_TRANSPORT_TYPE_GSI) {
res = gsi_poll_channel(cons_sys->ep->gsi_chan_hdl,
&gsi_notify);
if (res == GSI_STATUS_SUCCESS)
stop_polling = true;
else if (res != GSI_STATUS_POLL_EMPTY)
IPADMA_ERR(
"Failed: gsi_poll_channel, returned %d loop#:%d\n",
res, i);
} else {
res = sps_get_iovec(cons_sys->ep->ep_hdl, &iov);
if (res)
IPADMA_ERR(
"Failed: get_iovec, returned %d loop#:%d\n",
res, i);
if (iov.addr != 0)
stop_polling = true;
}
usleep_range(IPA_DMA_POLLING_MIN_SLEEP_RX,
IPA_DMA_POLLING_MAX_SLEEP_RX);
i++;
} while (!stop_polling);
if (ipa3_ctx->transport_prototype == IPA_TRANSPORT_TYPE_GSI) {
BUG_ON(len != gsi_notify.bytes_xfered);
BUG_ON(dest != ((struct ipa3_dma_xfer_wrapper *)
(gsi_notify.xfer_user_data))->phys_addr_dest);
} else {
BUG_ON(dest != iov.addr);
BUG_ON(len != iov.size);
}
mutex_lock(&ipa3_dma_ctx->sync_lock);
list_del(&head_descr->link);
cons_sys->len--;
kmem_cache_free(ipa3_dma_ctx->ipa_dma_xfer_wrapper_cache, xfer_descr);
/* wake the head of the list */
if (!list_empty(&cons_sys->head_desc_list)) {
head_descr = list_first_entry(&cons_sys->head_desc_list,
struct ipa3_dma_xfer_wrapper, link);
complete(&head_descr->xfer_done);
}
mutex_unlock(&ipa3_dma_ctx->sync_lock);
atomic_inc(&ipa3_dma_ctx->total_sync_memcpy);
atomic_dec(&ipa3_dma_ctx->sync_memcpy_pending_cnt);
if (ipa3_dma_ctx->destroy_pending && !ipa3_dma_work_pending())
complete(&ipa3_dma_ctx->done);
IPADMA_FUNC_EXIT();
return res;
fail_send:
list_del(&xfer_descr->link);
cons_sys->len--;
mutex_unlock(&ipa3_dma_ctx->sync_lock);
kmem_cache_free(ipa3_dma_ctx->ipa_dma_xfer_wrapper_cache, xfer_descr);
fail_mem_alloc:
atomic_dec(&ipa3_dma_ctx->sync_memcpy_pending_cnt);
if (ipa3_dma_ctx->destroy_pending && !ipa3_dma_work_pending())
complete(&ipa3_dma_ctx->done);
return res;
}
/**
* ipa3_dma_async_memcpy()- Perform asynchronous memcpy using IPA.
*
* @dest: physical address to store the copied data.
* @src: physical address of the source data to copy.
* @len: number of bytes to copy.
* @user_cb: callback function to notify the client when the copy was done.
* @user_param: cookie for user_cb.
*
* Return codes: 0: success
* -EINVAL: invalid params
 * -EPERM: operation not permitted as ipa_dma isn't enabled or
 * initialized
 * -SPS_ERROR: on sps failures
* -EFAULT: descr fifo is full.
*/
int ipa3_dma_async_memcpy(u64 dest, u64 src, int len,
void (*user_cb)(void *user1), void *user_param)
{
int ep_idx;
int res = 0;
struct ipa3_dma_xfer_wrapper *xfer_descr = NULL;
struct ipa3_sys_context *prod_sys;
struct ipa3_sys_context *cons_sys;
struct gsi_xfer_elem xfer_elem_cons, xfer_elem_prod;
unsigned long flags;
IPADMA_FUNC_ENTRY();
IPADMA_DBG_LOW("dest = 0x%llx, src = 0x%llx, len = %d\n",
dest, src, len);
if (ipa3_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't memcpy\n");
return -EPERM;
}
if ((max(src, dest) - min(src, dest)) < len) {
IPADMA_ERR("invalid addresses - overlapping buffers\n");
return -EINVAL;
}
if (len > IPA_DMA_MAX_PKT_SZ || len <= 0) {
IPADMA_ERR("invalid len, %d\n", len);
return -EINVAL;
}
if (ipa3_ctx->transport_prototype != IPA_TRANSPORT_TYPE_GSI) {
if (((u32)src != src) || ((u32)dest != dest)) {
		IPADMA_ERR(
			"Bad addr - only 32b addr supported for BAM\n");
		return -EINVAL;
}
}
if (!user_cb) {
IPADMA_ERR("null pointer: user_cb\n");
return -EINVAL;
}
spin_lock_irqsave(&ipa3_dma_ctx->pending_lock, flags);
if (!ipa3_dma_ctx->is_enabled) {
IPADMA_ERR("can't memcpy, IPA_DMA isn't enabled\n");
spin_unlock_irqrestore(&ipa3_dma_ctx->pending_lock, flags);
return -EPERM;
}
atomic_inc(&ipa3_dma_ctx->async_memcpy_pending_cnt);
spin_unlock_irqrestore(&ipa3_dma_ctx->pending_lock, flags);
if (ipa3_ctx->transport_prototype == IPA_TRANSPORT_TYPE_SPS) {
if (atomic_read(&ipa3_dma_ctx->async_memcpy_pending_cnt) >=
IPA_DMA_MAX_PENDING_ASYNC) {
atomic_dec(&ipa3_dma_ctx->async_memcpy_pending_cnt);
IPADMA_ERR("Reached pending requests limit\n");
return -EFAULT;
}
}
	ep_idx = ipa3_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS);
	if (-1 == ep_idx) {
		IPADMA_ERR("Client %u is not mapped\n",
			IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS);
		res = -EFAULT;
		goto fail_mem_alloc;
	}
	cons_sys = ipa3_ctx->ep[ep_idx].sys;
	ep_idx = ipa3_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_ASYNC_PROD);
	if (-1 == ep_idx) {
		IPADMA_ERR("Client %u is not mapped\n",
			IPA_CLIENT_MEMCPY_DMA_ASYNC_PROD);
		res = -EFAULT;
		goto fail_mem_alloc;
	}
prod_sys = ipa3_ctx->ep[ep_idx].sys;
xfer_descr = kmem_cache_zalloc(ipa3_dma_ctx->ipa_dma_xfer_wrapper_cache,
GFP_KERNEL);
if (!xfer_descr) {
		IPADMA_ERR("failed to alloc xfer descr wrapper\n");
res = -ENOMEM;
goto fail_mem_alloc;
}
xfer_descr->phys_addr_dest = dest;
xfer_descr->phys_addr_src = src;
xfer_descr->len = len;
xfer_descr->callback = user_cb;
xfer_descr->user1 = user_param;
spin_lock_irqsave(&ipa3_dma_ctx->async_lock, flags);
list_add_tail(&xfer_descr->link, &cons_sys->head_desc_list);
cons_sys->len++;
if (ipa3_ctx->transport_prototype == IPA_TRANSPORT_TYPE_GSI) {
xfer_elem_cons.addr = dest;
xfer_elem_cons.len = len;
xfer_elem_cons.type = GSI_XFER_ELEM_DATA;
xfer_elem_cons.flags = GSI_XFER_FLAG_EOT;
xfer_elem_cons.xfer_user_data = xfer_descr;
xfer_elem_prod.addr = src;
xfer_elem_prod.len = len;
xfer_elem_prod.type = GSI_XFER_ELEM_DATA;
xfer_elem_prod.flags = GSI_XFER_FLAG_EOT;
xfer_elem_prod.xfer_user_data = NULL;
res = gsi_queue_xfer(cons_sys->ep->gsi_chan_hdl, 1,
&xfer_elem_cons, true);
if (res) {
IPADMA_ERR(
"Failed: gsi_queue_xfer on dest descr res: %d\n",
res);
goto fail_send;
}
res = gsi_queue_xfer(prod_sys->ep->gsi_chan_hdl, 1,
&xfer_elem_prod, true);
if (res) {
IPADMA_ERR(
"Failed: gsi_queue_xfer on src descr res: %d\n",
res);
BUG();
goto fail_send;
}
} else {
res = sps_transfer_one(cons_sys->ep->ep_hdl, dest, len,
xfer_descr, 0);
if (res) {
IPADMA_ERR("Failed: sps_transfer_one on dest descr\n");
goto fail_send;
}
res = sps_transfer_one(prod_sys->ep->ep_hdl, src, len,
NULL, SPS_IOVEC_FLAG_EOT);
if (res) {
IPADMA_ERR("Failed: sps_transfer_one on src descr\n");
BUG();
goto fail_send;
}
}
spin_unlock_irqrestore(&ipa3_dma_ctx->async_lock, flags);
IPADMA_FUNC_EXIT();
return res;
fail_send:
list_del(&xfer_descr->link);
spin_unlock_irqrestore(&ipa3_dma_ctx->async_lock, flags);
kmem_cache_free(ipa3_dma_ctx->ipa_dma_xfer_wrapper_cache, xfer_descr);
fail_mem_alloc:
atomic_dec(&ipa3_dma_ctx->async_memcpy_pending_cnt);
if (ipa3_dma_ctx->destroy_pending && !ipa3_dma_work_pending())
complete(&ipa3_dma_ctx->done);
return res;
}
/**
* ipa3_dma_uc_memcpy() - Perform a memcpy action using IPA uC
* @dest: physical address to store the copied data.
* @src: physical address of the source data to copy.
* @len: number of bytes to copy.
*
* Return codes: 0: success
* -EINVAL: invalid params
-EPERM: operation not permitted as ipa_dma isn't enabled or
initialized
* -EBADF: IPA uC is not loaded
*/
int ipa3_dma_uc_memcpy(phys_addr_t dest, phys_addr_t src, int len)
{
int res;
unsigned long flags;
IPADMA_FUNC_ENTRY();
if (ipa3_dma_ctx == NULL) {
IPADMA_ERR("IPADMA isn't initialized, can't memcpy\n");
return -EPERM;
}
if ((max(src, dest) - min(src, dest)) < len) {
IPADMA_ERR("invalid addresses - overlapping buffers\n");
return -EINVAL;
}
if (len > IPA_DMA_MAX_PKT_SZ || len <= 0) {
IPADMA_ERR("invalid len, %d\n", len);
return -EINVAL;
}
spin_lock_irqsave(&ipa3_dma_ctx->pending_lock, flags);
if (!ipa3_dma_ctx->is_enabled) {
IPADMA_ERR("can't memcpy, IPADMA isn't enabled\n");
spin_unlock_irqrestore(&ipa3_dma_ctx->pending_lock, flags);
return -EPERM;
}
atomic_inc(&ipa3_dma_ctx->uc_memcpy_pending_cnt);
spin_unlock_irqrestore(&ipa3_dma_ctx->pending_lock, flags);
res = ipa3_uc_memcpy(dest, src, len);
if (res) {
IPADMA_ERR("ipa3_uc_memcpy failed %d\n", res);
goto dec_and_exit;
}
atomic_inc(&ipa3_dma_ctx->total_uc_memcpy);
res = 0;
dec_and_exit:
atomic_dec(&ipa3_dma_ctx->uc_memcpy_pending_cnt);
if (ipa3_dma_ctx->destroy_pending && !ipa3_dma_work_pending())
complete(&ipa3_dma_ctx->done);
IPADMA_FUNC_EXIT();
return res;
}
/**
* ipa3_dma_destroy() - teardown IPADMA pipes and release ipadma.
*
* This is a blocking function; it returns only after IPADMA is destroyed.
*/
void ipa3_dma_destroy(void)
{
int res = 0;
IPADMA_FUNC_ENTRY();
if (!ipa3_dma_ctx) {
IPADMA_ERR("IPADMA isn't initialized\n");
return;
}
if (ipa3_dma_work_pending()) {
ipa3_dma_ctx->destroy_pending = true;
		IPADMA_DBG("There are pending memcpy operations, waiting for completion\n");
wait_for_completion(&ipa3_dma_ctx->done);
}
res = ipa3_teardown_sys_pipe(ipa3_dma_ctx->ipa_dma_async_cons_hdl);
if (res)
IPADMA_ERR("teardown IPADMA ASYNC CONS failed\n");
ipa3_dma_ctx->ipa_dma_async_cons_hdl = 0;
res = ipa3_teardown_sys_pipe(ipa3_dma_ctx->ipa_dma_sync_cons_hdl);
if (res)
IPADMA_ERR("teardown IPADMA SYNC CONS failed\n");
ipa3_dma_ctx->ipa_dma_sync_cons_hdl = 0;
res = ipa3_teardown_sys_pipe(ipa3_dma_ctx->ipa_dma_async_prod_hdl);
if (res)
IPADMA_ERR("teardown IPADMA ASYNC PROD failed\n");
ipa3_dma_ctx->ipa_dma_async_prod_hdl = 0;
res = ipa3_teardown_sys_pipe(ipa3_dma_ctx->ipa_dma_sync_prod_hdl);
if (res)
IPADMA_ERR("teardown IPADMA SYNC PROD failed\n");
ipa3_dma_ctx->ipa_dma_sync_prod_hdl = 0;
ipa3_dma_debugfs_destroy();
kmem_cache_destroy(ipa3_dma_ctx->ipa_dma_xfer_wrapper_cache);
kfree(ipa3_dma_ctx);
ipa3_dma_ctx = NULL;
IPADMA_FUNC_EXIT();
}
/**
* ipa3_dma_async_memcpy_notify_cb() - Callback invoked by the IPA driver
* after a notification from the SPS driver (or poll mode) that an Rx
* operation completed (data was written to the dest descriptor on the
* async_cons ep).
*
* @priv: not in use.
* @evt: event name - IPA_RECEIVE.
* @data: the ipa_mem_buffer.
*/
void ipa3_dma_async_memcpy_notify_cb(void *priv,
	enum ipa_dp_evt_type evt, unsigned long data)
{
int ep_idx = 0;
struct ipa3_dma_xfer_wrapper *xfer_descr_expected;
struct ipa3_sys_context *sys;
unsigned long flags;
struct ipa_mem_buffer *mem_info;
IPADMA_FUNC_ENTRY();
mem_info = (struct ipa_mem_buffer *)data;
ep_idx = ipa3_get_ep_mapping(IPA_CLIENT_MEMCPY_DMA_ASYNC_CONS);
sys = ipa3_ctx->ep[ep_idx].sys;
spin_lock_irqsave(&ipa3_dma_ctx->async_lock, flags);
xfer_descr_expected = list_first_entry(&sys->head_desc_list,
struct ipa3_dma_xfer_wrapper, link);
list_del(&xfer_descr_expected->link);
sys->len--;
spin_unlock_irqrestore(&ipa3_dma_ctx->async_lock, flags);
if (ipa3_ctx->transport_prototype != IPA_TRANSPORT_TYPE_GSI) {
BUG_ON(xfer_descr_expected->phys_addr_dest !=
mem_info->phys_base);
BUG_ON(xfer_descr_expected->len != mem_info->size);
}
atomic_inc(&ipa3_dma_ctx->total_async_memcpy);
atomic_dec(&ipa3_dma_ctx->async_memcpy_pending_cnt);
xfer_descr_expected->callback(xfer_descr_expected->user1);
kmem_cache_free(ipa3_dma_ctx->ipa_dma_xfer_wrapper_cache,
xfer_descr_expected);
if (ipa3_dma_ctx->destroy_pending && !ipa3_dma_work_pending())
complete(&ipa3_dma_ctx->done);
IPADMA_FUNC_EXIT();
}
#ifdef CONFIG_DEBUG_FS
static struct dentry *dent;
static struct dentry *dfile_info;
static ssize_t ipa3_dma_debugfs_read(struct file *file, char __user *ubuf,
size_t count, loff_t *ppos)
{
int nbytes = 0;
if (!ipa3_dma_ctx) {
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"Not initialized\n");
} else {
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"Status:\n IPADMA is %s\n",
(ipa3_dma_ctx->is_enabled) ? "Enabled" : "Disabled");
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"Statistics:\n total sync memcpy: %d\n ",
atomic_read(&ipa3_dma_ctx->total_sync_memcpy));
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"total async memcpy: %d\n ",
atomic_read(&ipa3_dma_ctx->total_async_memcpy));
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"pending sync memcpy jobs: %d\n ",
atomic_read(&ipa3_dma_ctx->sync_memcpy_pending_cnt));
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"pending async memcpy jobs: %d\n",
atomic_read(&ipa3_dma_ctx->async_memcpy_pending_cnt));
nbytes += scnprintf(&dbg_buff[nbytes],
IPADMA_MAX_MSG_LEN - nbytes,
"pending uc memcpy jobs: %d\n",
atomic_read(&ipa3_dma_ctx->uc_memcpy_pending_cnt));
}
return simple_read_from_buffer(ubuf, count, ppos, dbg_buff, nbytes);
}
static ssize_t ipa3_dma_debugfs_reset_statistics(struct file *file,
const char __user *ubuf,
size_t count,
loff_t *ppos)
{
unsigned long missing;
s8 in_num = 0;
if (sizeof(dbg_buff) < count + 1)
return -EFAULT;
missing = copy_from_user(dbg_buff, ubuf, count);
if (missing)
return -EFAULT;
dbg_buff[count] = '\0';
if (kstrtos8(dbg_buff, 0, &in_num))
return -EFAULT;
switch (in_num) {
case 0:
if (ipa3_dma_work_pending())
		IPADMA_ERR("Note, there are pending memcpy operations\n");
atomic_set(&ipa3_dma_ctx->total_async_memcpy, 0);
atomic_set(&ipa3_dma_ctx->total_sync_memcpy, 0);
break;
default:
IPADMA_ERR("invalid argument: To reset statistics echo 0\n");
break;
}
return count;
}
const struct file_operations ipa3_ipadma_stats_ops = {
.read = ipa3_dma_debugfs_read,
.write = ipa3_dma_debugfs_reset_statistics,
};
static void ipa3_dma_debugfs_init(void)
{
const mode_t read_write_mode = S_IRUSR | S_IRGRP | S_IROTH |
S_IWUSR | S_IWGRP | S_IWOTH;
dent = debugfs_create_dir("ipa_dma", 0);
if (IS_ERR(dent)) {
IPADMA_ERR("fail to create folder ipa_dma\n");
return;
}
dfile_info =
debugfs_create_file("info", read_write_mode, dent,
0, &ipa3_ipadma_stats_ops);
if (!dfile_info || IS_ERR(dfile_info)) {
		IPADMA_ERR("fail to create file info\n");
goto fail;
}
return;
fail:
debugfs_remove_recursive(dent);
}
static void ipa3_dma_debugfs_destroy(void)
{
debugfs_remove_recursive(dent);
}
#endif /* CONFIG_DEBUG_FS */

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,44 @@
/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_HW_DEFS_H
#define _IPA_HW_DEFS_H
#include <linux/bitops.h>
/* This header defines various HW related data types */
#define IPA_A5_MUX_HDR_EXCP_FLAG_IP BIT(7)
#define IPA_A5_MUX_HDR_EXCP_FLAG_NAT BIT(6)
#define IPA_A5_MUX_HDR_EXCP_FLAG_SW_FLT BIT(5)
#define IPA_A5_MUX_HDR_EXCP_FLAG_TAG BIT(4)
#define IPA_A5_MUX_HDR_EXCP_FLAG_REPLICATED BIT(3)
#define IPA_A5_MUX_HDR_EXCP_FLAG_IHL BIT(2)
/**
* struct ipa3_a5_mux_hdr - A5 MUX header definition
* @interface_id: interface ID
* @src_pipe_index: source pipe index
* @flags: flags
* @metadata: metadata
*
* A5 MUX header is in BE, A5 runs in LE. This struct definition
* allows A5 SW to correctly parse the header
*/
struct ipa3_a5_mux_hdr {
u16 interface_id;
u8 src_pipe_index;
u8 flags;
u32 metadata;
};
#endif /* _IPA_HW_DEFS_H */

File diff suppressed because it is too large


@@ -0,0 +1,567 @@
/* Copyright (c) 2014-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/interrupt.h>
#include "ipa_i.h"
#define INTERRUPT_WORKQUEUE_NAME "ipa_interrupt_wq"
#define DIS_SUSPEND_INTERRUPT_TIMEOUT 5
#define IPA_IRQ_NUM_MAX 32
struct ipa3_interrupt_info {
ipa_irq_handler_t handler;
enum ipa_irq_type interrupt;
void *private_data;
bool deferred_flag;
};
struct ipa3_interrupt_work_wrap {
struct work_struct interrupt_work;
ipa_irq_handler_t handler;
enum ipa_irq_type interrupt;
void *private_data;
void *interrupt_data;
};
static struct ipa3_interrupt_info ipa_interrupt_to_cb[IPA_IRQ_NUM_MAX];
static struct workqueue_struct *ipa_interrupt_wq;
static u32 ipa_ee;
static void ipa3_tx_suspend_interrupt_wa(void);
static void ipa3_enable_tx_suspend_wa(struct work_struct *work);
static DECLARE_DELAYED_WORK(dwork_en_suspend_int,
ipa3_enable_tx_suspend_wa);
static spinlock_t suspend_wa_lock;
static void ipa3_process_interrupts(bool isr_context);
static int ipa3_irq_mapping[IPA_IRQ_MAX] = {
[IPA_UC_TX_CMD_Q_NOT_FULL_IRQ] = -1,
[IPA_UC_TO_PROC_ACK_Q_NOT_FULL_IRQ] = -1,
[IPA_BAD_SNOC_ACCESS_IRQ] = 0,
[IPA_EOT_COAL_IRQ] = -1,
[IPA_UC_IRQ_0] = 2,
[IPA_UC_IRQ_1] = 3,
[IPA_UC_IRQ_2] = 4,
[IPA_UC_IRQ_3] = 5,
[IPA_UC_IN_Q_NOT_EMPTY_IRQ] = 6,
[IPA_UC_RX_CMD_Q_NOT_FULL_IRQ] = 7,
[IPA_PROC_TO_UC_ACK_Q_NOT_EMPTY_IRQ] = 8,
[IPA_RX_ERR_IRQ] = 9,
[IPA_DEAGGR_ERR_IRQ] = 10,
[IPA_TX_ERR_IRQ] = 11,
[IPA_STEP_MODE_IRQ] = 12,
[IPA_PROC_ERR_IRQ] = 13,
[IPA_TX_SUSPEND_IRQ] = 14,
[IPA_TX_HOLB_DROP_IRQ] = 15,
[IPA_BAM_GSI_IDLE_IRQ] = 16,
};
static void ipa3_interrupt_defer(struct work_struct *work);
static DECLARE_WORK(ipa3_interrupt_defer_work, ipa3_interrupt_defer);
static void ipa3_deferred_interrupt_work(struct work_struct *work)
{
struct ipa3_interrupt_work_wrap *work_data =
container_of(work,
struct ipa3_interrupt_work_wrap,
interrupt_work);
IPADBG("call handler from workq...\n");
work_data->handler(work_data->interrupt, work_data->private_data,
work_data->interrupt_data);
kfree(work_data->interrupt_data);
kfree(work_data);
}
static bool ipa3_is_valid_ep(u32 ep_suspend_data)
{
u32 bmsk = 1;
u32 i = 0;
for (i = 0; i < ipa3_ctx->ipa_num_pipes; i++) {
if ((ep_suspend_data & bmsk) && (ipa3_ctx->ep[i].valid))
return true;
bmsk = bmsk << 1;
}
return false;
}
static int ipa3_handle_interrupt(int irq_num, bool isr_context)
{
struct ipa3_interrupt_info interrupt_info;
struct ipa3_interrupt_work_wrap *work_data;
u32 suspend_data;
void *interrupt_data = NULL;
struct ipa_tx_suspend_irq_data *suspend_interrupt_data = NULL;
int res;
interrupt_info = ipa_interrupt_to_cb[irq_num];
if (interrupt_info.handler == NULL) {
IPAERR("A callback function wasn't set for interrupt num %d\n",
irq_num);
return -EINVAL;
}
switch (interrupt_info.interrupt) {
case IPA_TX_SUSPEND_IRQ:
IPADBG_LOW("processing TX_SUSPEND interrupt work-around\n");
ipa3_tx_suspend_interrupt_wa();
suspend_data = ipahal_read_reg_n(IPA_IRQ_SUSPEND_INFO_EE_n,
ipa_ee);
IPADBG_LOW("get interrupt %d\n", suspend_data);
if (ipa3_ctx->ipa_hw_type >= IPA_HW_v3_1) {
/* Clearing L2 interrupts status */
ipahal_write_reg_n(IPA_SUSPEND_IRQ_CLR_EE_n,
ipa_ee, suspend_data);
}
if (!ipa3_is_valid_ep(suspend_data))
return 0;
suspend_interrupt_data =
kzalloc(sizeof(*suspend_interrupt_data), GFP_ATOMIC);
if (!suspend_interrupt_data) {
IPAERR("failed allocating suspend_interrupt_data\n");
return -ENOMEM;
}
suspend_interrupt_data->endpoints = suspend_data;
interrupt_data = suspend_interrupt_data;
break;
case IPA_UC_IRQ_0:
if (ipa3_ctx->apply_rg10_wa) {
/*
* Early detection of uC crash. If the RG10 workaround is
* enabled, the crash would otherwise go undetected: before
* the uC event is processed, the interrupt is cleared using
* a uC register write, which times out because the uC has
* already crashed.
*/
if (ipa3_ctx->uc_ctx.uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_ERROR)
ipa3_ctx->uc_ctx.uc_failed = true;
}
break;
default:
break;
}
/* Force defer processing if in ISR context. */
if (interrupt_info.deferred_flag || isr_context) {
work_data = kzalloc(sizeof(struct ipa3_interrupt_work_wrap),
GFP_ATOMIC);
if (!work_data) {
IPAERR("failed allocating ipa3_interrupt_work_wrap\n");
res = -ENOMEM;
goto fail_alloc_work;
}
INIT_WORK(&work_data->interrupt_work,
ipa3_deferred_interrupt_work);
work_data->handler = interrupt_info.handler;
work_data->interrupt = interrupt_info.interrupt;
work_data->private_data = interrupt_info.private_data;
work_data->interrupt_data = interrupt_data;
queue_work(ipa_interrupt_wq, &work_data->interrupt_work);
} else {
interrupt_info.handler(interrupt_info.interrupt,
interrupt_info.private_data,
interrupt_data);
kfree(interrupt_data);
}
return 0;
fail_alloc_work:
kfree(interrupt_data);
return res;
}
static void ipa3_enable_tx_suspend_wa(struct work_struct *work)
{
u32 en;
u32 suspend_bmask;
int irq_num;
IPADBG_LOW("Enter\n");
irq_num = ipa3_irq_mapping[IPA_TX_SUSPEND_IRQ];
BUG_ON(irq_num == -1);
/* make sure ipa hw is clocked on*/
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
en = ipahal_read_reg_n(IPA_IRQ_EN_EE_n, ipa_ee);
suspend_bmask = 1 << irq_num;
/*enable TX_SUSPEND_IRQ*/
en |= suspend_bmask;
IPADBG("enable TX_SUSPEND_IRQ, IPA_IRQ_EN_EE reg, write val = %u\n"
, en);
ipa3_uc_rg10_write_reg(IPA_IRQ_EN_EE_n, ipa_ee, en);
ipa3_process_interrupts(false);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
IPADBG_LOW("Exit\n");
}
static void ipa3_tx_suspend_interrupt_wa(void)
{
u32 val;
u32 suspend_bmask;
int irq_num;
IPADBG_LOW("Enter\n");
irq_num = ipa3_irq_mapping[IPA_TX_SUSPEND_IRQ];
BUG_ON(irq_num == -1);
/*disable TX_SUSPEND_IRQ*/
val = ipahal_read_reg_n(IPA_IRQ_EN_EE_n, ipa_ee);
suspend_bmask = 1 << irq_num;
val &= ~suspend_bmask;
IPADBG("Disabling TX_SUSPEND_IRQ, write val: %u to IPA_IRQ_EN_EE reg\n",
val);
ipa3_uc_rg10_write_reg(IPA_IRQ_EN_EE_n, ipa_ee, val);
IPADBG_LOW(" processing suspend interrupt work-around, delayed work\n");
queue_delayed_work(ipa_interrupt_wq, &dwork_en_suspend_int,
msecs_to_jiffies(DIS_SUSPEND_INTERRUPT_TIMEOUT));
IPADBG_LOW("Exit\n");
}
static inline bool is_uc_irq(int irq_num)
{
if (ipa_interrupt_to_cb[irq_num].interrupt >= IPA_UC_IRQ_0 &&
ipa_interrupt_to_cb[irq_num].interrupt <= IPA_UC_IRQ_3)
return true;
else
return false;
}
static void ipa3_process_interrupts(bool isr_context)
{
u32 reg;
u32 bmsk;
u32 i = 0;
u32 en;
unsigned long flags;
bool uc_irq;
IPADBG_LOW("Enter\n");
spin_lock_irqsave(&suspend_wa_lock, flags);
en = ipahal_read_reg_n(IPA_IRQ_EN_EE_n, ipa_ee);
reg = ipahal_read_reg_n(IPA_IRQ_STTS_EE_n, ipa_ee);
while (en & reg) {
bmsk = 1;
for (i = 0; i < IPA_IRQ_NUM_MAX; i++) {
if (en & reg & bmsk) {
uc_irq = is_uc_irq(i);
/*
* Clear uC interrupt before processing to avoid
* clearing unhandled interrupts
*/
if (uc_irq)
ipa3_uc_rg10_write_reg(IPA_IRQ_CLR_EE_n,
ipa_ee, bmsk);
/*
* handle the interrupt with spin_lock
* unlocked to avoid calling client in atomic
* context. mutual exclusion still preserved
* as the read/clr is done with spin_lock
* locked.
*/
spin_unlock_irqrestore(&suspend_wa_lock, flags);
ipa3_handle_interrupt(i, isr_context);
spin_lock_irqsave(&suspend_wa_lock, flags);
/*
* Clear non uC interrupt after processing
* to avoid clearing interrupt data
*/
if (!uc_irq)
ipa3_uc_rg10_write_reg(IPA_IRQ_CLR_EE_n,
ipa_ee, bmsk);
}
bmsk = bmsk << 1;
}
/*
* In case uC failed interrupt cannot be cleared.
* Device will crash as part of handling uC event handler.
*/
if (ipa3_ctx->apply_rg10_wa && ipa3_ctx->uc_ctx.uc_failed)
break;
reg = ipahal_read_reg_n(IPA_IRQ_STTS_EE_n, ipa_ee);
/* due to the suspend interrupt HW bug we must re-read the
* EN register, otherwise the while loop would never end
*/
en = ipahal_read_reg_n(IPA_IRQ_EN_EE_n, ipa_ee);
}
spin_unlock_irqrestore(&suspend_wa_lock, flags);
IPADBG_LOW("Exit\n");
}
static void ipa3_interrupt_defer(struct work_struct *work)
{
IPADBG("processing interrupts in wq\n");
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
ipa3_process_interrupts(false);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
IPADBG("Done\n");
}
static irqreturn_t ipa3_isr(int irq, void *ctxt)
{
unsigned long flags;
IPADBG_LOW("Enter\n");
/* defer interrupt handling in case IPA is not clocked on */
if (ipa3_active_clients_trylock(&flags) == 0) {
IPADBG("defer interrupt processing\n");
queue_work(ipa3_ctx->power_mgmt_wq, &ipa3_interrupt_defer_work);
return IRQ_HANDLED;
}
if (ipa3_ctx->ipa3_active_clients.cnt == 0) {
IPADBG("defer interrupt processing\n");
queue_work(ipa3_ctx->power_mgmt_wq, &ipa3_interrupt_defer_work);
goto bail;
}
ipa3_process_interrupts(true);
IPADBG_LOW("Exit\n");
bail:
ipa3_active_clients_trylock_unlock(&flags);
return IRQ_HANDLED;
}
/**
* ipa3_add_interrupt_handler() - Adds handler to an interrupt type
* @interrupt: Interrupt type
* @handler: The handler to be added
* @deferred_flag: whether the handler processing should be deferred in
* a workqueue
* @private_data: the client's private data
*
* Adds a handler for an interrupt type and enables the corresponding bit in
* the IRQ_EN register; the associated interrupt in the IRQ_STTS register is
* thereby enabled
*/
int ipa3_add_interrupt_handler(enum ipa_irq_type interrupt,
ipa_irq_handler_t handler,
bool deferred_flag,
void *private_data)
{
u32 val;
u32 bmsk;
int irq_num;
int client_idx, ep_idx;
IPADBG("in ipa3_add_interrupt_handler interrupt_enum(%d)\n", interrupt);
if (interrupt < IPA_BAD_SNOC_ACCESS_IRQ ||
interrupt >= IPA_IRQ_MAX) {
IPAERR("invalid interrupt number %d\n", interrupt);
return -EINVAL;
}
irq_num = ipa3_irq_mapping[interrupt];
if (irq_num < 0 || irq_num >= IPA_IRQ_NUM_MAX) {
IPAERR("interrupt %d not supported\n", interrupt);
WARN_ON(1);
return -EFAULT;
}
IPADBG("ipa_interrupt_to_cb irq_num(%d)\n", irq_num);
ipa_interrupt_to_cb[irq_num].deferred_flag = deferred_flag;
ipa_interrupt_to_cb[irq_num].handler = handler;
ipa_interrupt_to_cb[irq_num].private_data = private_data;
ipa_interrupt_to_cb[irq_num].interrupt = interrupt;
val = ipahal_read_reg_n(IPA_IRQ_EN_EE_n, ipa_ee);
IPADBG("read IPA_IRQ_EN_EE_n register. reg = %d\n", val);
bmsk = 1 << irq_num;
val |= bmsk;
ipa3_uc_rg10_write_reg(IPA_IRQ_EN_EE_n, ipa_ee, val);
IPADBG("wrote IPA_IRQ_EN_EE_n register. reg = %d\n", val);
/* register SUSPEND_IRQ_EN_EE_n_ADDR for L2 interrupt*/
if ((interrupt == IPA_TX_SUSPEND_IRQ) &&
(ipa3_ctx->ipa_hw_type >= IPA_HW_v3_1)) {
val = ~0;
for (client_idx = 0; client_idx < IPA_CLIENT_MAX; client_idx++)
if (IPA_CLIENT_IS_Q6_CONS(client_idx) ||
IPA_CLIENT_IS_Q6_PROD(client_idx)) {
ep_idx = ipa3_get_ep_mapping(client_idx);
IPADBG("modem ep_idx(%d) client_idx = %d\n",
ep_idx, client_idx);
if (ep_idx == -1)
IPADBG("Invalid IPA client\n");
else
val &= ~(1 << ep_idx);
}
ipahal_write_reg_n(IPA_SUSPEND_IRQ_EN_EE_n, ipa_ee, val);
IPADBG("wrote IPA_SUSPEND_IRQ_EN_EE_n reg = %d\n", val);
}
return 0;
}
/**
* ipa3_remove_interrupt_handler() - Removes the handler for an interrupt type
* @interrupt: Interrupt type
*
* Removes the handler and disables the corresponding bit in the IRQ_EN register
*/
int ipa3_remove_interrupt_handler(enum ipa_irq_type interrupt)
{
u32 val;
u32 bmsk;
int irq_num;
if (interrupt < IPA_BAD_SNOC_ACCESS_IRQ ||
interrupt >= IPA_IRQ_MAX) {
IPAERR("invalid interrupt number %d\n", interrupt);
return -EINVAL;
}
irq_num = ipa3_irq_mapping[interrupt];
if (irq_num < 0 || irq_num >= IPA_IRQ_NUM_MAX) {
IPAERR("interrupt %d not supported\n", interrupt);
WARN_ON(1);
return -EFAULT;
}
kfree(ipa_interrupt_to_cb[irq_num].private_data);
ipa_interrupt_to_cb[irq_num].deferred_flag = false;
ipa_interrupt_to_cb[irq_num].handler = NULL;
ipa_interrupt_to_cb[irq_num].private_data = NULL;
ipa_interrupt_to_cb[irq_num].interrupt = -1;
/* clean SUSPEND_IRQ_EN_EE_n_ADDR for L2 interrupt */
if ((interrupt == IPA_TX_SUSPEND_IRQ) &&
(ipa3_ctx->ipa_hw_type >= IPA_HW_v3_1)) {
ipahal_write_reg_n(IPA_SUSPEND_IRQ_EN_EE_n, ipa_ee, 0);
IPADBG("wrote IPA_SUSPEND_IRQ_EN_EE_n reg = %d\n", 0);
}
val = ipahal_read_reg_n(IPA_IRQ_EN_EE_n, ipa_ee);
bmsk = 1 << irq_num;
val &= ~bmsk;
ipa3_uc_rg10_write_reg(IPA_IRQ_EN_EE_n, ipa_ee, val);
return 0;
}
/**
* ipa3_interrupts_init() - Initialize the IPA interrupts framework
* @ipa_irq: The interrupt number to allocate
* @ee: Execution environment
* @ipa_dev: The basic device structure representing the IPA driver
*
* - Initialize the ipa_interrupt_to_cb array
* - Clear interrupts status
* - Register the ipa interrupt handler - ipa3_isr
* - Enable apps processor wakeup by IPA interrupts
*/
int ipa3_interrupts_init(u32 ipa_irq, u32 ee, struct device *ipa_dev)
{
int idx;
int res = 0;
ipa_ee = ee;
for (idx = 0; idx < IPA_IRQ_NUM_MAX; idx++) {
ipa_interrupt_to_cb[idx].deferred_flag = false;
ipa_interrupt_to_cb[idx].handler = NULL;
ipa_interrupt_to_cb[idx].private_data = NULL;
ipa_interrupt_to_cb[idx].interrupt = -1;
}
ipa_interrupt_wq = create_singlethread_workqueue(
INTERRUPT_WORKQUEUE_NAME);
if (!ipa_interrupt_wq) {
IPAERR("workqueue creation failed\n");
return -ENOMEM;
}
res = request_irq(ipa_irq, (irq_handler_t) ipa3_isr,
IRQF_TRIGGER_RISING, "ipa", ipa_dev);
	if (res) {
		IPAERR("fail to register IPA IRQ handler irq=%d\n", ipa_irq);
		destroy_workqueue(ipa_interrupt_wq);
		ipa_interrupt_wq = NULL;
		return -ENODEV;
	}
IPADBG("IPA IRQ handler irq=%d registered\n", ipa_irq);
res = enable_irq_wake(ipa_irq);
if (res)
IPAERR("fail to enable IPA IRQ wakeup irq=%d res=%d\n",
ipa_irq, res);
else
IPADBG("IPA IRQ wakeup enabled irq=%d\n", ipa_irq);
spin_lock_init(&suspend_wa_lock);
return 0;
}
/**
* ipa3_suspend_active_aggr_wa() - Emulate suspend IRQ
* @clnt_hdl: suspended client handle; the IRQ is emulated for this pipe
*
* Emulate a suspend IRQ to unsuspend a client that was suspended with an
* open aggregation frame, bypassing the HW bug where no IRQ is generated
* when an endpoint is suspended during open aggregation.
*/
void ipa3_suspend_active_aggr_wa(u32 clnt_hdl)
{
struct ipa3_interrupt_info interrupt_info;
struct ipa3_interrupt_work_wrap *work_data;
struct ipa_tx_suspend_irq_data *suspend_interrupt_data;
int irq_num;
int aggr_active_bitmap = ipahal_read_reg(IPA_STATE_AGGR_ACTIVE);
if (aggr_active_bitmap & (1 << clnt_hdl)) {
/* force close aggregation */
ipahal_write_reg(IPA_AGGR_FORCE_CLOSE, (1 << clnt_hdl));
/* simulate suspend IRQ */
irq_num = ipa3_irq_mapping[IPA_TX_SUSPEND_IRQ];
interrupt_info = ipa_interrupt_to_cb[irq_num];
if (interrupt_info.handler == NULL) {
IPAERR("no CB function for IPA_TX_SUSPEND_IRQ!\n");
return;
}
suspend_interrupt_data = kzalloc(
sizeof(*suspend_interrupt_data),
GFP_ATOMIC);
if (!suspend_interrupt_data) {
IPAERR("failed allocating suspend_interrupt_data\n");
return;
}
suspend_interrupt_data->endpoints = 1 << clnt_hdl;
work_data = kzalloc(sizeof(struct ipa3_interrupt_work_wrap),
GFP_ATOMIC);
if (!work_data) {
IPAERR("failed allocating ipa3_interrupt_work_wrap\n");
goto fail_alloc_work;
}
INIT_WORK(&work_data->interrupt_work,
ipa3_deferred_interrupt_work);
work_data->handler = interrupt_info.handler;
work_data->interrupt = IPA_TX_SUSPEND_IRQ;
work_data->private_data = interrupt_info.private_data;
work_data->interrupt_data = (void *)suspend_interrupt_data;
queue_work(ipa_interrupt_wq, &work_data->interrupt_work);
return;
fail_alloc_work:
kfree(suspend_interrupt_data);
}
}


@@ -0,0 +1,615 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/fs.h>
#include <linux/sched.h>
#include "ipa_i.h"
struct ipa3_intf {
char name[IPA_RESOURCE_NAME_MAX];
struct list_head link;
u32 num_tx_props;
u32 num_rx_props;
u32 num_ext_props;
struct ipa_ioc_tx_intf_prop *tx;
struct ipa_ioc_rx_intf_prop *rx;
struct ipa_ioc_ext_intf_prop *ext;
enum ipa_client_type excp_pipe;
};
struct ipa3_push_msg {
struct ipa_msg_meta meta;
ipa_msg_free_fn callback;
void *buff;
struct list_head link;
};
struct ipa3_pull_msg {
struct ipa_msg_meta meta;
ipa_msg_pull_fn callback;
struct list_head link;
};
/**
* ipa3_register_intf() - register "logical" interface
* @name: [in] interface name
* @tx: [in] TX properties of the interface
* @rx: [in] RX properties of the interface
*
* Register an interface and its tx and rx properties, this allows
* configuration of rules from user-space
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_register_intf(const char *name, const struct ipa_tx_intf *tx,
const struct ipa_rx_intf *rx)
{
return ipa3_register_intf_ext(name, tx, rx, NULL);
}
/**
* ipa3_register_intf_ext() - register "logical" interface which has only
* extended properties
* @name: [in] interface name
* @tx: [in] TX properties of the interface
* @rx: [in] RX properties of the interface
* @ext: [in] EXT properties of the interface
*
* Register an interface and its tx, rx and ext properties, this allows
* configuration of rules from user-space
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_register_intf_ext(const char *name, const struct ipa_tx_intf *tx,
const struct ipa_rx_intf *rx,
const struct ipa_ext_intf *ext)
{
struct ipa3_intf *intf;
u32 len;
if (name == NULL || (tx == NULL && rx == NULL && ext == NULL)) {
IPAERR("invalid params name=%p tx=%p rx=%p ext=%p\n", name,
tx, rx, ext);
return -EINVAL;
}
if (tx && tx->num_props > IPA_NUM_PROPS_MAX) {
IPAERR("invalid tx num_props=%d max=%d\n", tx->num_props,
IPA_NUM_PROPS_MAX);
return -EINVAL;
}
if (rx && rx->num_props > IPA_NUM_PROPS_MAX) {
IPAERR("invalid rx num_props=%d max=%d\n", rx->num_props,
IPA_NUM_PROPS_MAX);
return -EINVAL;
}
if (ext && ext->num_props > IPA_NUM_PROPS_MAX) {
IPAERR("invalid ext num_props=%d max=%d\n", ext->num_props,
IPA_NUM_PROPS_MAX);
return -EINVAL;
}
len = sizeof(struct ipa3_intf);
intf = kzalloc(len, GFP_KERNEL);
if (intf == NULL) {
IPAERR("fail to alloc 0x%x bytes\n", len);
return -ENOMEM;
}
strlcpy(intf->name, name, IPA_RESOURCE_NAME_MAX);
if (tx) {
intf->num_tx_props = tx->num_props;
len = tx->num_props * sizeof(struct ipa_ioc_tx_intf_prop);
intf->tx = kzalloc(len, GFP_KERNEL);
if (intf->tx == NULL) {
IPAERR("fail to alloc 0x%x bytes\n", len);
kfree(intf);
return -ENOMEM;
}
memcpy(intf->tx, tx->prop, len);
}
if (rx) {
intf->num_rx_props = rx->num_props;
len = rx->num_props * sizeof(struct ipa_ioc_rx_intf_prop);
intf->rx = kzalloc(len, GFP_KERNEL);
if (intf->rx == NULL) {
IPAERR("fail to alloc 0x%x bytes\n", len);
kfree(intf->tx);
kfree(intf);
return -ENOMEM;
}
memcpy(intf->rx, rx->prop, len);
}
if (ext) {
intf->num_ext_props = ext->num_props;
len = ext->num_props * sizeof(struct ipa_ioc_ext_intf_prop);
intf->ext = kzalloc(len, GFP_KERNEL);
if (intf->ext == NULL) {
IPAERR("fail to alloc 0x%x bytes\n", len);
kfree(intf->rx);
kfree(intf->tx);
kfree(intf);
return -ENOMEM;
}
memcpy(intf->ext, ext->prop, len);
}
if (ext && ext->excp_pipe_valid)
intf->excp_pipe = ext->excp_pipe;
else
intf->excp_pipe = IPA_CLIENT_APPS_LAN_CONS;
mutex_lock(&ipa3_ctx->lock);
list_add_tail(&intf->link, &ipa3_ctx->intf_list);
mutex_unlock(&ipa3_ctx->lock);
return 0;
}
/**
* ipa3_deregister_intf() - de-register previously registered logical interface
* @name: [in] interface name
*
* De-register a previously registered interface
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_deregister_intf(const char *name)
{
struct ipa3_intf *entry;
struct ipa3_intf *next;
int result = -EINVAL;
if ((name == NULL) ||
(strnlen(name, IPA_RESOURCE_NAME_MAX) == IPA_RESOURCE_NAME_MAX)) {
IPAERR("invalid param name=%s\n", name);
return result;
}
mutex_lock(&ipa3_ctx->lock);
list_for_each_entry_safe(entry, next, &ipa3_ctx->intf_list, link) {
if (!strcmp(entry->name, name)) {
list_del(&entry->link);
kfree(entry->ext);
kfree(entry->rx);
kfree(entry->tx);
kfree(entry);
result = 0;
break;
}
}
mutex_unlock(&ipa3_ctx->lock);
return result;
}
/**
* ipa3_query_intf() - query logical interface properties
* @lookup: [inout] interface name and number of properties
*
* Obtain the handle and number of tx and rx properties for the named
* interface, used as part of querying the tx and rx properties for
* configuration of various rules from user-space
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_query_intf(struct ipa_ioc_query_intf *lookup)
{
struct ipa3_intf *entry;
int result = -EINVAL;
if (lookup == NULL) {
IPAERR("invalid param lookup=%p\n", lookup);
return result;
}
if (strnlen(lookup->name, IPA_RESOURCE_NAME_MAX) ==
IPA_RESOURCE_NAME_MAX) {
IPAERR("Interface name too long. (%s)\n", lookup->name);
return result;
}
mutex_lock(&ipa3_ctx->lock);
list_for_each_entry(entry, &ipa3_ctx->intf_list, link) {
if (!strcmp(entry->name, lookup->name)) {
lookup->num_tx_props = entry->num_tx_props;
lookup->num_rx_props = entry->num_rx_props;
lookup->num_ext_props = entry->num_ext_props;
lookup->excp_pipe = entry->excp_pipe;
result = 0;
break;
}
}
mutex_unlock(&ipa3_ctx->lock);
return result;
}
/**
 * ipa3_query_intf_tx_props() - query TX props of an interface
* @tx: [inout] interface tx attributes
*
* Obtain the tx properties for the specified interface
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_query_intf_tx_props(struct ipa_ioc_query_intf_tx_props *tx)
{
struct ipa3_intf *entry;
int result = -EINVAL;
if (tx == NULL) {
IPAERR("invalid param tx=%p\n", tx);
return result;
}
if (strnlen(tx->name, IPA_RESOURCE_NAME_MAX) == IPA_RESOURCE_NAME_MAX) {
IPAERR("Interface name too long. (%s)\n", tx->name);
return result;
}
mutex_lock(&ipa3_ctx->lock);
list_for_each_entry(entry, &ipa3_ctx->intf_list, link) {
if (!strcmp(entry->name, tx->name)) {
memcpy(tx->tx, entry->tx, entry->num_tx_props *
sizeof(struct ipa_ioc_tx_intf_prop));
result = 0;
break;
}
}
mutex_unlock(&ipa3_ctx->lock);
return result;
}
/**
 * ipa3_query_intf_rx_props() - query RX props of an interface
* @rx: [inout] interface rx attributes
*
* Obtain the rx properties for the specified interface
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_query_intf_rx_props(struct ipa_ioc_query_intf_rx_props *rx)
{
struct ipa3_intf *entry;
int result = -EINVAL;
if (rx == NULL) {
IPAERR("invalid param rx=%p\n", rx);
return result;
}
if (strnlen(rx->name, IPA_RESOURCE_NAME_MAX) == IPA_RESOURCE_NAME_MAX) {
IPAERR("Interface name too long. (%s)\n", rx->name);
return result;
}
mutex_lock(&ipa3_ctx->lock);
list_for_each_entry(entry, &ipa3_ctx->intf_list, link) {
if (!strcmp(entry->name, rx->name)) {
memcpy(rx->rx, entry->rx, entry->num_rx_props *
sizeof(struct ipa_ioc_rx_intf_prop));
result = 0;
break;
}
}
mutex_unlock(&ipa3_ctx->lock);
return result;
}
/**
 * ipa3_query_intf_ext_props() - query EXT props of an interface
* @ext: [inout] interface ext attributes
*
* Obtain the ext properties for the specified interface
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_query_intf_ext_props(struct ipa_ioc_query_intf_ext_props *ext)
{
struct ipa3_intf *entry;
int result = -EINVAL;
if (ext == NULL) {
IPAERR("invalid param ext=%p\n", ext);
return result;
}
mutex_lock(&ipa3_ctx->lock);
list_for_each_entry(entry, &ipa3_ctx->intf_list, link) {
if (!strcmp(entry->name, ext->name)) {
memcpy(ext->ext, entry->ext, entry->num_ext_props *
sizeof(struct ipa_ioc_ext_intf_prop));
result = 0;
break;
}
}
mutex_unlock(&ipa3_ctx->lock);
return result;
}
/**
* ipa3_send_msg() - Send "message" from kernel client to IPA driver
* @meta: [in] message meta-data
* @buff: [in] the payload for message
* @callback: [in] free callback
*
 * Client supplies the message meta-data and payload, which the IPA driver
 * buffers until it is read by user-space. After the user-space read, the IPA
 * driver invokes the supplied callback to free the message payload. The
 * client must not touch or free the payload after calling this API.
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_send_msg(struct ipa_msg_meta *meta, void *buff,
ipa_msg_free_fn callback)
{
struct ipa3_push_msg *msg;
if (meta == NULL || (buff == NULL && callback != NULL) ||
(buff != NULL && callback == NULL)) {
IPAERR("invalid param meta=%p buff=%p, callback=%p\n",
meta, buff, callback);
return -EINVAL;
}
if (meta->msg_type >= IPA_EVENT_MAX_NUM) {
IPAERR("unsupported message type %d\n", meta->msg_type);
return -EINVAL;
}
msg = kzalloc(sizeof(struct ipa3_push_msg), GFP_KERNEL);
if (msg == NULL) {
IPAERR("fail to alloc ipa_msg container\n");
return -ENOMEM;
}
msg->meta = *meta;
msg->buff = buff;
msg->callback = callback;
mutex_lock(&ipa3_ctx->msg_lock);
list_add_tail(&msg->link, &ipa3_ctx->msg_list);
mutex_unlock(&ipa3_ctx->msg_lock);
IPA_STATS_INC_CNT(ipa3_ctx->stats.msg_w[meta->msg_type]);
wake_up(&ipa3_ctx->msg_waitq);
return 0;
}
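The ownership hand-off described in the comment above can be modeled in plain user-space C. This is a sketch only: `msg_send`, `msg_consume`, and the node type are illustrative names, not driver API. The point is the contract that `buff` and `callback` must be both set or both NULL, and that the queue (not the caller) triggers the free callback once the message is consumed.

```c
#include <stdlib.h>

typedef void (*msg_free_fn)(void *buff);

struct msg_node {
	void *buff;
	size_t len;
	msg_free_fn cb;
	struct msg_node *next;
};

static struct msg_node *msg_head, *msg_tail;

/* Mirrors the ipa3_send_msg() contract: buff and callback must be both
 * set or both NULL; on success the queue owns buff and the caller must
 * not touch it again. */
int msg_send(void *buff, size_t len, msg_free_fn cb)
{
	struct msg_node *m;

	if ((buff == NULL) != (cb == NULL))
		return -1;
	m = calloc(1, sizeof(*m));
	if (!m)
		return -1;
	m->buff = buff;
	m->len = len;
	m->cb = cb;
	if (msg_tail)		/* FIFO, like list_add_tail() */
		msg_tail->next = m;
	else
		msg_head = m;
	msg_tail = m;
	return 0;
}

static int freed_payloads;
static void count_free(void *buff) { freed_payloads++; free(buff); }

/* Reader side: dequeue one message, then let the free callback reclaim
 * the payload, as ipa3_read() does after copy_to_user(). Returns the
 * payload length, or -1 when the queue is empty. */
long msg_consume(void)
{
	struct msg_node *m = msg_head;
	long len;

	if (!m)
		return -1;
	msg_head = m->next;
	if (!msg_head)
		msg_tail = NULL;
	len = (long)m->len;
	if (m->buff)
		m->cb(m->buff);
	free(m);
	return len;
}
```

The mismatched `(buff, callback)` pair is rejected up front, exactly as the driver rejects `buff` without `callback` and vice versa.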
/**
* ipa3_register_pull_msg() - register pull message type
* @meta: [in] message meta-data
* @callback: [in] pull callback
*
* Register message callback by kernel client with IPA driver for IPA driver to
* pull message on-demand.
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_register_pull_msg(struct ipa_msg_meta *meta, ipa_msg_pull_fn callback)
{
struct ipa3_pull_msg *msg;
if (meta == NULL || callback == NULL) {
IPAERR("invalid param meta=%p callback=%p\n", meta, callback);
return -EINVAL;
}
msg = kzalloc(sizeof(struct ipa3_pull_msg), GFP_KERNEL);
if (msg == NULL) {
IPAERR("fail to alloc ipa_msg container\n");
return -ENOMEM;
}
msg->meta = *meta;
msg->callback = callback;
mutex_lock(&ipa3_ctx->msg_lock);
list_add_tail(&msg->link, &ipa3_ctx->pull_msg_list);
mutex_unlock(&ipa3_ctx->msg_lock);
return 0;
}
/**
* ipa3_deregister_pull_msg() - De-register pull message type
* @meta: [in] message meta-data
*
* De-register "message" by kernel client from IPA driver
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
int ipa3_deregister_pull_msg(struct ipa_msg_meta *meta)
{
struct ipa3_pull_msg *entry;
struct ipa3_pull_msg *next;
int result = -EINVAL;
if (meta == NULL) {
IPAERR("invalid param name=%p\n", meta);
return result;
}
mutex_lock(&ipa3_ctx->msg_lock);
list_for_each_entry_safe(entry, next, &ipa3_ctx->pull_msg_list, link) {
if (entry->meta.msg_len == meta->msg_len &&
entry->meta.msg_type == meta->msg_type) {
list_del(&entry->link);
kfree(entry);
result = 0;
break;
}
}
mutex_unlock(&ipa3_ctx->msg_lock);
return result;
}
/**
* ipa3_read() - read message from IPA device
* @filp: [in] file pointer
* @buf: [out] buffer to read into
* @count: [in] size of above buffer
* @f_pos: [inout] file position
*
 * User-space should continually read from /dev/ipa; the read will block when
 * there are no messages to read. Upon return, user-space should read the
 * ipa_msg_meta from the start of the buffer to learn what type of message was
 * read and its length in the remainder of the buffer. The buffer supplied
 * must be big enough to hold the message meta-data and the largest defined
 * message type.
*
* Returns: how many bytes copied to buffer
*
* Note: Should not be called from atomic context
*/
ssize_t ipa3_read(struct file *filp, char __user *buf, size_t count,
loff_t *f_pos)
{
char __user *start;
struct ipa3_push_msg *msg = NULL;
int ret;
DEFINE_WAIT(wait);
int locked;
start = buf;
while (1) {
prepare_to_wait(&ipa3_ctx->msg_waitq,
&wait,
TASK_INTERRUPTIBLE);
mutex_lock(&ipa3_ctx->msg_lock);
locked = 1;
if (!list_empty(&ipa3_ctx->msg_list)) {
msg = list_first_entry(&ipa3_ctx->msg_list,
struct ipa3_push_msg, link);
list_del(&msg->link);
}
IPADBG_LOW("msg=%p\n", msg);
if (msg) {
locked = 0;
mutex_unlock(&ipa3_ctx->msg_lock);
if (copy_to_user(buf, &msg->meta,
sizeof(struct ipa_msg_meta))) {
ret = -EFAULT;
break;
}
buf += sizeof(struct ipa_msg_meta);
count -= sizeof(struct ipa_msg_meta);
if (msg->buff) {
if (copy_to_user(buf, msg->buff,
msg->meta.msg_len)) {
ret = -EFAULT;
break;
}
buf += msg->meta.msg_len;
count -= msg->meta.msg_len;
msg->callback(msg->buff, msg->meta.msg_len,
msg->meta.msg_type);
}
IPA_STATS_INC_CNT(
ipa3_ctx->stats.msg_r[msg->meta.msg_type]);
kfree(msg);
}
ret = -EAGAIN;
if (filp->f_flags & O_NONBLOCK)
break;
ret = -EINTR;
if (signal_pending(current))
break;
if (start != buf)
break;
locked = 0;
mutex_unlock(&ipa3_ctx->msg_lock);
schedule();
}
finish_wait(&ipa3_ctx->msg_waitq, &wait);
if (start != buf && ret != -EFAULT)
ret = buf - start;
if (locked)
mutex_unlock(&ipa3_ctx->msg_lock);
return ret;
}
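On the other side of this read path, user-space must parse the meta-then-payload framing that ipa3_read() produces: a meta header first, then msg_len payload bytes. A hedged sketch, using a stand-in `demo_msg_meta` struct rather than the real uapi layout (field names and widths here are illustrative):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for struct ipa_msg_meta: only the fields the
 * read framing relies on (a type tag and a payload length). The real
 * layout lives in the IPA uapi headers. */
struct demo_msg_meta {
	uint8_t msg_type;
	uint32_t msg_len;
};

/* Parse one meta+payload record from a buffer filled the way
 * ipa3_read() fills it: meta first, then msg_len payload bytes.
 * Returns bytes consumed, or -1 if the buffer is too short. */
long demo_parse_msg(const char *buf, size_t count,
		    struct demo_msg_meta *meta, const char **payload)
{
	if (count < sizeof(*meta))
		return -1;
	memcpy(meta, buf, sizeof(*meta));
	if (count - sizeof(*meta) < meta->msg_len)
		return -1;
	*payload = buf + sizeof(*meta);
	return (long)(sizeof(*meta) + meta->msg_len);
}
```

A reader would call this in a loop over whatever a single read() returned, advancing by the consumed length each time.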
/**
* ipa3_pull_msg() - pull the specified message from client
* @meta: [in] message meta-data
* @buf: [out] buffer to read into
* @count: [in] size of above buffer
*
* Populate the supplied buffer with the pull message which is fetched
* from client, the message must have previously been registered with
* the IPA driver
*
* Returns: how many bytes copied to buffer
*
* Note: Should not be called from atomic context
*/
int ipa3_pull_msg(struct ipa_msg_meta *meta, char *buff, size_t count)
{
struct ipa3_pull_msg *entry;
int result = -EINVAL;
if (meta == NULL || buff == NULL || !count) {
IPAERR("invalid param name=%p buff=%p count=%zu\n",
meta, buff, count);
return result;
}
mutex_lock(&ipa3_ctx->msg_lock);
list_for_each_entry(entry, &ipa3_ctx->pull_msg_list, link) {
if (entry->meta.msg_len == meta->msg_len &&
entry->meta.msg_type == meta->msg_type) {
result = entry->callback(buff, count, meta->msg_type);
break;
}
}
mutex_unlock(&ipa3_ctx->msg_lock);
return result;
}

/* Copyright (c) 2015, 2016 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/ipa.h>
#include <linux/msm_gsi.h>
#include <linux/ipa_mhi.h>
#include "../ipa_common_i.h"
#include "ipa_i.h"
#include "ipa_qmi_service.h"
#define IPA_MHI_DRV_NAME "ipa_mhi"
#define IPA_MHI_DBG(fmt, args...) \
do { \
pr_debug(IPA_MHI_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_MHI_DBG_LOW(fmt, args...) \
do { \
pr_debug(IPA_MHI_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_MHI_ERR(fmt, args...) \
do { \
pr_err(IPA_MHI_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_MHI_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_MHI_FUNC_ENTRY() \
IPA_MHI_DBG_LOW("ENTRY\n")
#define IPA_MHI_FUNC_EXIT() \
IPA_MHI_DBG_LOW("EXIT\n")
#define IPA_MHI_MAX_UL_CHANNELS 1
#define IPA_MHI_MAX_DL_CHANNELS 1
/* bit #40 in address should be asserted for MHI transfers over pcie */
#define IPA_MHI_HOST_ADDR_COND(addr) \
((params->assert_bit40)?(IPA_MHI_HOST_ADDR(addr)):(addr))
enum ipa3_mhi_polling_mode {
IPA_MHI_POLLING_MODE_DB_MODE,
IPA_MHI_POLLING_MODE_POLL_MODE,
};
bool ipa3_mhi_stop_gsi_channel(enum ipa_client_type client)
{
int res;
int ipa_ep_idx;
struct ipa3_ep_context *ep;
IPA_MHI_FUNC_ENTRY();
ipa_ep_idx = ipa3_get_ep_mapping(client);
if (ipa_ep_idx == -1) {
IPA_MHI_ERR("Invalid client.\n");
		return false;
}
ep = &ipa3_ctx->ep[ipa_ep_idx];
IPA_MHI_DBG_LOW("Stopping GSI channel %ld\n", ep->gsi_chan_hdl);
res = gsi_stop_channel(ep->gsi_chan_hdl);
if (res != 0 &&
res != -GSI_STATUS_AGAIN &&
res != -GSI_STATUS_TIMED_OUT) {
IPA_MHI_ERR("GSI stop channel failed %d\n",
res);
WARN_ON(1);
return false;
}
if (res == 0) {
IPA_MHI_DBG_LOW("GSI channel %ld STOP\n",
ep->gsi_chan_hdl);
return true;
}
return false;
}
static int ipa3_mhi_reset_gsi_channel(enum ipa_client_type client)
{
int res;
int clnt_hdl;
IPA_MHI_FUNC_ENTRY();
clnt_hdl = ipa3_get_ep_mapping(client);
if (clnt_hdl < 0)
return -EFAULT;
res = ipa3_reset_gsi_channel(clnt_hdl);
if (res) {
IPA_MHI_ERR("ipa3_reset_gsi_channel failed %d\n", res);
return -EFAULT;
}
IPA_MHI_FUNC_EXIT();
return 0;
}
int ipa3_mhi_reset_channel_internal(enum ipa_client_type client)
{
int res;
IPA_MHI_FUNC_ENTRY();
res = ipa3_mhi_reset_gsi_channel(client);
if (res) {
IPAERR("ipa3_mhi_reset_gsi_channel failed\n");
ipa_assert();
return res;
}
res = ipa3_disable_data_path(ipa3_get_ep_mapping(client));
if (res) {
IPA_MHI_ERR("ipa3_disable_data_path failed %d\n", res);
return res;
}
IPA_MHI_FUNC_EXIT();
return 0;
}
int ipa3_mhi_start_channel_internal(enum ipa_client_type client)
{
int res;
IPA_MHI_FUNC_ENTRY();
res = ipa3_enable_data_path(ipa3_get_ep_mapping(client));
if (res) {
IPA_MHI_ERR("ipa3_enable_data_path failed %d\n", res);
return res;
}
IPA_MHI_FUNC_EXIT();
return 0;
}
static int ipa3_mhi_get_ch_poll_cfg(enum ipa_client_type client,
struct ipa_mhi_ch_ctx *ch_ctx_host, int ring_size)
{
switch (ch_ctx_host->pollcfg) {
case 0:
		/* set default polling configuration according to MHI spec */
		if (IPA_CLIENT_IS_PROD(client))
			return 7;
		else
			return (ring_size / 2) / 8;
default:
return ch_ctx_host->pollcfg;
}
}
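The pollcfg fallback above is compact enough to lift out: a zero value in the channel context means "use the MHI-spec default", which is 7 for producer channels and `(ring_size / 2) / 8` for consumers. A standalone sketch (`demo_ch_poll_cfg` is an illustrative name, not driver API):

```c
/* Mirrors the default-selection logic of ipa3_mhi_get_ch_poll_cfg():
 * a non-zero pollcfg from the host channel context wins; otherwise
 * fall back to the MHI-spec default for the channel direction. */
int demo_ch_poll_cfg(int pollcfg, int is_prod, int ring_size)
{
	if (pollcfg != 0)
		return pollcfg;
	return is_prod ? 7 : (ring_size / 2) / 8;
}
```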
static int ipa_mhi_start_gsi_channel(enum ipa_client_type client,
int ipa_ep_idx, struct start_gsi_channel *params)
{
int res;
struct gsi_evt_ring_props ev_props;
struct ipa_mhi_msi_info *msi;
struct gsi_chan_props ch_props;
union __packed gsi_channel_scratch ch_scratch;
struct ipa3_ep_context *ep;
struct ipa_gsi_ep_config *ep_cfg;
IPA_MHI_FUNC_ENTRY();
ep = &ipa3_ctx->ep[ipa_ep_idx];
msi = params->msi;
ep_cfg = ipa_get_gsi_ep_info(ipa_ep_idx);
if (!ep_cfg) {
IPA_MHI_ERR("Wrong parameter, ep_cfg is NULL\n");
return -EPERM;
}
/* allocate event ring only for the first time pipe is connected */
if (params->state == IPA_HW_MHI_CHANNEL_STATE_INVALID) {
memset(&ev_props, 0, sizeof(ev_props));
ev_props.intf = GSI_EVT_CHTYPE_MHI_EV;
ev_props.intr = GSI_INTR_MSI;
ev_props.re_size = GSI_EVT_RING_RE_SIZE_16B;
ev_props.ring_len = params->ev_ctx_host->rlen;
ev_props.ring_base_addr = IPA_MHI_HOST_ADDR_COND(
params->ev_ctx_host->rbase);
ev_props.int_modt = params->ev_ctx_host->intmodt *
IPA_SLEEP_CLK_RATE_KHZ;
ev_props.int_modc = params->ev_ctx_host->intmodc;
ev_props.intvec = ((msi->data & ~msi->mask) |
(params->ev_ctx_host->msivec & msi->mask));
ev_props.msi_addr = IPA_MHI_HOST_ADDR_COND(
(((u64)msi->addr_hi << 32) | msi->addr_low));
ev_props.rp_update_addr = IPA_MHI_HOST_ADDR_COND(
params->event_context_addr +
offsetof(struct ipa_mhi_ev_ctx, rp));
ev_props.exclusive = true;
ev_props.err_cb = params->ev_err_cb;
ev_props.user_data = params->channel;
ev_props.evchid_valid = true;
ev_props.evchid = params->evchid;
IPA_MHI_DBG("allocating event ring ep:%u evchid:%u\n",
ipa_ep_idx, ev_props.evchid);
res = gsi_alloc_evt_ring(&ev_props, ipa3_ctx->gsi_dev_hdl,
&ep->gsi_evt_ring_hdl);
		if (res) {
			IPA_MHI_ERR("gsi_alloc_evt_ring failed %d\n", res);
			goto fail_alloc_evt;
		}
IPA_MHI_DBG("client %d, caching event ring hdl %lu\n",
client,
ep->gsi_evt_ring_hdl);
*params->cached_gsi_evt_ring_hdl =
ep->gsi_evt_ring_hdl;
} else {
IPA_MHI_DBG("event ring already exists: evt_ring_hdl=%lu\n",
*params->cached_gsi_evt_ring_hdl);
ep->gsi_evt_ring_hdl = *params->cached_gsi_evt_ring_hdl;
}
memset(&ch_props, 0, sizeof(ch_props));
ch_props.prot = GSI_CHAN_PROT_MHI;
ch_props.dir = IPA_CLIENT_IS_PROD(client) ?
GSI_CHAN_DIR_TO_GSI : GSI_CHAN_DIR_FROM_GSI;
ch_props.ch_id = ep_cfg->ipa_gsi_chan_num;
ch_props.evt_ring_hdl = *params->cached_gsi_evt_ring_hdl;
ch_props.re_size = GSI_CHAN_RE_SIZE_16B;
ch_props.ring_len = params->ch_ctx_host->rlen;
ch_props.ring_base_addr = IPA_MHI_HOST_ADDR_COND(
params->ch_ctx_host->rbase);
ch_props.use_db_eng = GSI_CHAN_DB_MODE;
ch_props.max_prefetch = GSI_ONE_PREFETCH_SEG;
ch_props.low_weight = 1;
ch_props.err_cb = params->ch_err_cb;
ch_props.chan_user_data = params->channel;
res = gsi_alloc_channel(&ch_props, ipa3_ctx->gsi_dev_hdl,
&ep->gsi_chan_hdl);
if (res) {
IPA_MHI_ERR("gsi_alloc_channel failed %d\n",
res);
goto fail_alloc_ch;
}
memset(&ch_scratch, 0, sizeof(ch_scratch));
ch_scratch.mhi.mhi_host_wp_addr = IPA_MHI_HOST_ADDR_COND(
params->channel_context_addr +
offsetof(struct ipa_mhi_ch_ctx, wp));
ch_scratch.mhi.assert_bit40 = params->assert_bit40;
ch_scratch.mhi.max_outstanding_tre =
ep_cfg->ipa_if_tlv * ch_props.re_size;
ch_scratch.mhi.outstanding_threshold =
min(ep_cfg->ipa_if_tlv / 2, 8) * ch_props.re_size;
ch_scratch.mhi.oob_mod_threshold = 4;
if (params->ch_ctx_host->brstmode == IPA_MHI_BURST_MODE_DEFAULT ||
params->ch_ctx_host->brstmode == IPA_MHI_BURST_MODE_ENABLE) {
ch_scratch.mhi.burst_mode_enabled = true;
ch_scratch.mhi.polling_configuration =
ipa3_mhi_get_ch_poll_cfg(client, params->ch_ctx_host,
(ch_props.ring_len / ch_props.re_size));
ch_scratch.mhi.polling_mode = IPA_MHI_POLLING_MODE_DB_MODE;
} else {
ch_scratch.mhi.burst_mode_enabled = false;
}
res = gsi_write_channel_scratch(ep->gsi_chan_hdl,
ch_scratch);
if (res) {
IPA_MHI_ERR("gsi_write_channel_scratch failed %d\n",
res);
goto fail_ch_scratch;
}
*params->mhi = ch_scratch.mhi;
IPA_MHI_DBG("Starting channel\n");
res = gsi_start_channel(ep->gsi_chan_hdl);
if (res) {
IPA_MHI_ERR("gsi_start_channel failed %d\n", res);
goto fail_ch_start;
}
IPA_MHI_FUNC_EXIT();
return 0;
fail_ch_start:
fail_ch_scratch:
gsi_dealloc_channel(ep->gsi_chan_hdl);
fail_alloc_ch:
gsi_dealloc_evt_ring(ep->gsi_evt_ring_hdl);
ep->gsi_evt_ring_hdl = ~0;
fail_alloc_evt:
return res;
}
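The error path of ipa_mhi_start_gsi_channel() above is the standard kernel goto-unwind ladder: each label releases only what was acquired before the failing step, in reverse acquisition order. A minimal model with failure injection (all `demo_*`/`held_*` names are hypothetical, standing in for the GSI alloc/dealloc calls):

```c
static int held_evt, held_ch;

/* fail_at: 0 = succeed, 1 = event-ring alloc fails, 2 = channel alloc
 * fails, 3 = scratch write fails. On any failure, everything acquired
 * so far is released and the resource counters return to baseline. */
int demo_start_channel(int fail_at)
{
	if (fail_at == 1)
		goto fail_alloc_evt;
	held_evt++;			/* gsi_alloc_evt_ring succeeded */
	if (fail_at == 2)
		goto fail_alloc_ch;
	held_ch++;			/* gsi_alloc_channel succeeded */
	if (fail_at == 3)
		goto fail_ch_scratch;
	return 0;

fail_ch_scratch:
	held_ch--;			/* gsi_dealloc_channel */
fail_alloc_ch:
	held_evt--;			/* gsi_dealloc_evt_ring */
fail_alloc_evt:
	return -1;
}
```

Falling through the labels is deliberate: a failure at step N executes every cleanup from label N downward, which is why the labels must stay in reverse acquisition order.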
int ipa3_mhi_init_engine(struct ipa_mhi_init_engine *params)
{
int res;
struct gsi_device_scratch gsi_scratch;
struct ipa_gsi_ep_config *gsi_ep_info;
IPA_MHI_FUNC_ENTRY();
if (!params) {
IPA_MHI_ERR("null args\n");
return -EINVAL;
}
/* Initialize IPA MHI engine */
gsi_ep_info = ipa_get_gsi_ep_info(
ipa_get_ep_mapping(IPA_CLIENT_MHI_PROD));
	if (!gsi_ep_info) {
		IPAERR("MHI PROD has no ep allocated\n");
		ipa_assert();
		return -EFAULT;
	}
memset(&gsi_scratch, 0, sizeof(gsi_scratch));
gsi_scratch.mhi_base_chan_idx_valid = true;
gsi_scratch.mhi_base_chan_idx = gsi_ep_info->ipa_gsi_chan_num +
params->gsi.first_ch_idx;
res = gsi_write_device_scratch(ipa3_ctx->gsi_dev_hdl,
&gsi_scratch);
if (res) {
IPA_MHI_ERR("failed to write device scratch %d\n", res);
goto fail_init_engine;
}
IPA_MHI_FUNC_EXIT();
return 0;
fail_init_engine:
return res;
}
/**
* ipa3_connect_mhi_pipe() - Connect pipe to IPA and start corresponding
* MHI channel
* @in: connect parameters
* @clnt_hdl: [out] client handle for this pipe
*
* This function is called by IPA MHI client driver on MHI channel start.
* This function is called after MHI engine was started.
*
* Return codes: 0 : success
* negative : error
*/
int ipa3_connect_mhi_pipe(struct ipa_mhi_connect_params_internal *in,
u32 *clnt_hdl)
{
struct ipa3_ep_context *ep;
int ipa_ep_idx;
int res;
enum ipa_client_type client;
IPA_MHI_FUNC_ENTRY();
if (!in || !clnt_hdl) {
IPA_MHI_ERR("NULL args\n");
return -EINVAL;
}
client = in->sys->client;
ipa_ep_idx = ipa3_get_ep_mapping(client);
if (ipa_ep_idx == -1) {
IPA_MHI_ERR("Invalid client.\n");
return -EINVAL;
}
ep = &ipa3_ctx->ep[ipa_ep_idx];
if (ep->valid == 1) {
IPA_MHI_ERR("EP already allocated.\n");
return -EPERM;
}
memset(ep, 0, offsetof(struct ipa3_ep_context, sys));
ep->valid = 1;
ep->skip_ep_cfg = in->sys->skip_ep_cfg;
ep->client = client;
ep->client_notify = in->sys->notify;
ep->priv = in->sys->priv;
ep->keep_ipa_awake = in->sys->keep_ipa_awake;
res = ipa_mhi_start_gsi_channel(client,
ipa_ep_idx, &in->start.gsi);
if (res) {
IPA_MHI_ERR("ipa_mhi_start_gsi_channel failed %d\n",
res);
goto fail_start_channel;
}
res = ipa3_enable_data_path(ipa_ep_idx);
if (res) {
IPA_MHI_ERR("enable data path failed res=%d clnt=%d.\n", res,
ipa_ep_idx);
goto fail_ep_cfg;
}
if (!ep->skip_ep_cfg) {
if (ipa3_cfg_ep(ipa_ep_idx, &in->sys->ipa_ep_cfg)) {
IPAERR("fail to configure EP.\n");
goto fail_ep_cfg;
}
if (ipa3_cfg_ep_status(ipa_ep_idx, &ep->status)) {
IPAERR("fail to configure status of EP.\n");
goto fail_ep_cfg;
}
IPA_MHI_DBG("ep configuration successful\n");
} else {
IPA_MHI_DBG("skipping ep configuration\n");
}
*clnt_hdl = ipa_ep_idx;
if (!ep->skip_ep_cfg && IPA_CLIENT_IS_PROD(client))
ipa3_install_dflt_flt_rules(ipa_ep_idx);
ipa3_ctx->skip_ep_cfg_shadow[ipa_ep_idx] = ep->skip_ep_cfg;
IPA_MHI_DBG("client %d (ep: %d) connected\n", client,
ipa_ep_idx);
IPA_MHI_FUNC_EXIT();
return 0;
fail_ep_cfg:
ipa3_disable_data_path(ipa_ep_idx);
fail_start_channel:
memset(ep, 0, offsetof(struct ipa3_ep_context, sys));
return -EPERM;
}
/**
* ipa3_disconnect_mhi_pipe() - Disconnect pipe from IPA and reset corresponding
* MHI channel
* @clnt_hdl: client handle for this pipe
*
* This function is called by IPA MHI client driver on MHI channel reset.
* This function is called after MHI channel was started.
* This function is doing the following:
* - Send command to uC/GSI to reset corresponding MHI channel
* - Configure IPA EP control
*
* Return codes: 0 : success
* negative : error
*/
int ipa3_disconnect_mhi_pipe(u32 clnt_hdl)
{
struct ipa3_ep_context *ep;
int res;
IPA_MHI_FUNC_ENTRY();
if (clnt_hdl >= ipa3_ctx->ipa_num_pipes) {
IPAERR("invalid handle %d\n", clnt_hdl);
return -EINVAL;
}
if (ipa3_ctx->ep[clnt_hdl].valid == 0) {
IPAERR("pipe was not connected %d\n", clnt_hdl);
return -EINVAL;
}
ep = &ipa3_ctx->ep[clnt_hdl];
if (ipa3_ctx->transport_prototype == IPA_TRANSPORT_TYPE_GSI) {
res = gsi_dealloc_channel(ep->gsi_chan_hdl);
if (res) {
IPAERR("gsi_dealloc_channel failed %d\n", res);
goto fail_reset_channel;
}
}
ep->valid = 0;
ipa3_delete_dflt_flt_rules(clnt_hdl);
IPA_MHI_DBG("client (ep: %d) disconnected\n", clnt_hdl);
IPA_MHI_FUNC_EXIT();
return 0;
fail_reset_channel:
return res;
}
int ipa3_mhi_resume_channels_internal(enum ipa_client_type client,
bool LPTransitionRejected, bool brstmode_enabled,
union __packed gsi_channel_scratch ch_scratch, u8 index)
{
int res;
int ipa_ep_idx;
struct ipa3_ep_context *ep;
IPA_MHI_FUNC_ENTRY();
ipa_ep_idx = ipa3_get_ep_mapping(client);
ep = &ipa3_ctx->ep[ipa_ep_idx];
if (brstmode_enabled && !LPTransitionRejected) {
/*
* set polling mode bit to DB mode before
* resuming the channel
*/
res = gsi_write_channel_scratch(
ep->gsi_chan_hdl, ch_scratch);
if (res) {
IPA_MHI_ERR("write ch scratch fail %d\n"
, res);
return res;
}
}
res = gsi_start_channel(ep->gsi_chan_hdl);
if (res) {
IPA_MHI_ERR("failed to resume channel error %d\n", res);
return res;
}
IPA_MHI_FUNC_EXIT();
return 0;
}
int ipa3_mhi_query_ch_info(enum ipa_client_type client,
struct gsi_chan_info *ch_info)
{
int ipa_ep_idx;
int res;
struct ipa3_ep_context *ep;
IPA_MHI_FUNC_ENTRY();
ipa_ep_idx = ipa3_get_ep_mapping(client);
ep = &ipa3_ctx->ep[ipa_ep_idx];
res = gsi_query_channel_info(ep->gsi_chan_hdl, ch_info);
if (res) {
IPAERR("gsi_query_channel_info failed\n");
return res;
}
IPA_MHI_FUNC_EXIT();
return 0;
}
bool ipa3_has_open_aggr_frame(enum ipa_client_type client)
{
u32 aggr_state_active;
int ipa_ep_idx;
aggr_state_active = ipahal_read_reg(IPA_STATE_AGGR_ACTIVE);
IPA_MHI_DBG_LOW("IPA_STATE_AGGR_ACTIVE_OFST 0x%x\n", aggr_state_active);
ipa_ep_idx = ipa_get_ep_mapping(client);
if (ipa_ep_idx == -1) {
ipa_assert();
return false;
}
if ((1 << ipa_ep_idx) & aggr_state_active)
return true;
return false;
}
int ipa3_mhi_destroy_channel(enum ipa_client_type client)
{
int res;
int ipa_ep_idx;
struct ipa3_ep_context *ep;
ipa_ep_idx = ipa3_get_ep_mapping(client);
ep = &ipa3_ctx->ep[ipa_ep_idx];
IPA_MHI_DBG("reset event ring (hdl: %lu, ep: %d)\n",
ep->gsi_evt_ring_hdl, ipa_ep_idx);
res = gsi_reset_evt_ring(ep->gsi_evt_ring_hdl);
if (res) {
IPAERR(" failed to reset evt ring %lu, err %d\n"
, ep->gsi_evt_ring_hdl, res);
goto fail;
}
IPA_MHI_DBG("dealloc event ring (hdl: %lu, ep: %d)\n",
ep->gsi_evt_ring_hdl, ipa_ep_idx);
res = gsi_dealloc_evt_ring(
ep->gsi_evt_ring_hdl);
if (res) {
IPAERR("dealloc evt ring %lu failed, err %d\n"
, ep->gsi_evt_ring_hdl, res);
goto fail;
}
return 0;
fail:
return res;
}
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("IPA MHI driver");

/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
#include "ipa_i.h"
#include "ipahal/ipahal.h"
#define IPA_NAT_PHYS_MEM_OFFSET 0
#define IPA_NAT_PHYS_MEM_SIZE IPA_RAM_NAT_SIZE
#define IPA_NAT_TEMP_MEM_SIZE 128
static int ipa3_nat_vma_fault_remap(
struct vm_area_struct *vma, struct vm_fault *vmf)
{
IPADBG("\n");
vmf->page = NULL;
return VM_FAULT_SIGBUS;
}
/* VMA related file operations functions */
static struct vm_operations_struct ipa3_nat_remap_vm_ops = {
.fault = ipa3_nat_vma_fault_remap,
};
static int ipa3_nat_open(struct inode *inode, struct file *filp)
{
struct ipa3_nat_mem *nat_ctx;
IPADBG("\n");
nat_ctx = container_of(inode->i_cdev, struct ipa3_nat_mem, cdev);
filp->private_data = nat_ctx;
IPADBG("return\n");
return 0;
}
static int ipa3_nat_mmap(struct file *filp, struct vm_area_struct *vma)
{
unsigned long vsize = vma->vm_end - vma->vm_start;
struct ipa3_nat_mem *nat_ctx =
(struct ipa3_nat_mem *)filp->private_data;
unsigned long phys_addr;
int result;
mutex_lock(&nat_ctx->lock);
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
if (nat_ctx->is_sys_mem) {
IPADBG("Mapping system memory\n");
if (nat_ctx->is_mapped) {
IPAERR("mapping already exists, only 1 supported\n");
result = -EINVAL;
goto bail;
}
IPADBG("map sz=0x%zx\n", nat_ctx->size);
result =
dma_mmap_coherent(
ipa3_ctx->pdev, vma,
nat_ctx->vaddr, nat_ctx->dma_handle,
nat_ctx->size);
if (result) {
IPAERR("unable to map memory. Err:%d\n", result);
goto bail;
}
ipa3_ctx->nat_mem.nat_base_address = nat_ctx->vaddr;
} else {
IPADBG("Mapping shared(local) memory\n");
IPADBG("map sz=0x%lx\n", vsize);
if ((IPA_NAT_PHYS_MEM_SIZE == 0) ||
(vsize > IPA_NAT_PHYS_MEM_SIZE)) {
result = -EINVAL;
goto bail;
}
phys_addr = ipa3_ctx->ipa_wrapper_base +
ipa3_ctx->ctrl->ipa_reg_base_ofst +
ipahal_get_reg_n_ofst(IPA_SRAM_DIRECT_ACCESS_n,
IPA_NAT_PHYS_MEM_OFFSET);
if (remap_pfn_range(
vma, vma->vm_start,
phys_addr >> PAGE_SHIFT, vsize, vma->vm_page_prot)) {
IPAERR("remap failed\n");
result = -EAGAIN;
goto bail;
}
ipa3_ctx->nat_mem.nat_base_address = (void *)vma->vm_start;
}
nat_ctx->is_mapped = true;
vma->vm_ops = &ipa3_nat_remap_vm_ops;
IPADBG("return\n");
result = 0;
bail:
mutex_unlock(&nat_ctx->lock);
return result;
}
static const struct file_operations ipa3_nat_fops = {
.owner = THIS_MODULE,
.open = ipa3_nat_open,
.mmap = ipa3_nat_mmap
};
/**
* ipa3_allocate_temp_nat_memory() - Allocates temp nat memory
*
 * Called during NAT device creation to pre-allocate temporary NAT memory
*/
void ipa3_allocate_temp_nat_memory(void)
{
struct ipa3_nat_mem *nat_ctx = &(ipa3_ctx->nat_mem);
int gfp_flags = GFP_KERNEL | __GFP_ZERO;
nat_ctx->tmp_vaddr =
dma_alloc_coherent(ipa3_ctx->pdev, IPA_NAT_TEMP_MEM_SIZE,
&nat_ctx->tmp_dma_handle, gfp_flags);
if (nat_ctx->tmp_vaddr == NULL) {
IPAERR("Temp Memory alloc failed\n");
nat_ctx->is_tmp_mem = false;
return;
}
nat_ctx->is_tmp_mem = true;
IPADBG("IPA NAT allocated temp memory successfully\n");
}
/**
* ipa3_create_nat_device() - Create the NAT device
*
* Called during ipa init to create nat device
*
* Returns: 0 on success, negative on failure
*/
int ipa3_create_nat_device(void)
{
struct ipa3_nat_mem *nat_ctx = &(ipa3_ctx->nat_mem);
int result;
IPADBG("\n");
mutex_lock(&nat_ctx->lock);
nat_ctx->class = class_create(THIS_MODULE, NAT_DEV_NAME);
if (IS_ERR(nat_ctx->class)) {
IPAERR("unable to create the class\n");
result = -ENODEV;
goto vaddr_alloc_fail;
}
result = alloc_chrdev_region(&nat_ctx->dev_num,
0,
1,
NAT_DEV_NAME);
if (result) {
IPAERR("alloc_chrdev_region err.\n");
result = -ENODEV;
goto alloc_chrdev_region_fail;
}
nat_ctx->dev =
device_create(nat_ctx->class, NULL, nat_ctx->dev_num, nat_ctx,
"%s", NAT_DEV_NAME);
if (IS_ERR(nat_ctx->dev)) {
IPAERR("device_create err:%ld\n", PTR_ERR(nat_ctx->dev));
result = -ENODEV;
goto device_create_fail;
}
cdev_init(&nat_ctx->cdev, &ipa3_nat_fops);
nat_ctx->cdev.owner = THIS_MODULE;
nat_ctx->cdev.ops = &ipa3_nat_fops;
result = cdev_add(&nat_ctx->cdev, nat_ctx->dev_num, 1);
if (result) {
IPAERR("cdev_add err=%d\n", -result);
goto cdev_add_fail;
}
	IPADBG("ipa nat dev added successfully. major:%d minor:%d\n",
MAJOR(nat_ctx->dev_num),
MINOR(nat_ctx->dev_num));
nat_ctx->is_dev = true;
ipa3_allocate_temp_nat_memory();
IPADBG("IPA NAT device created successfully\n");
result = 0;
goto bail;
cdev_add_fail:
device_destroy(nat_ctx->class, nat_ctx->dev_num);
device_create_fail:
unregister_chrdev_region(nat_ctx->dev_num, 1);
alloc_chrdev_region_fail:
class_destroy(nat_ctx->class);
vaddr_alloc_fail:
if (nat_ctx->vaddr) {
IPADBG("Releasing system memory\n");
dma_free_coherent(
ipa3_ctx->pdev, nat_ctx->size,
nat_ctx->vaddr, nat_ctx->dma_handle);
nat_ctx->vaddr = NULL;
nat_ctx->dma_handle = 0;
nat_ctx->size = 0;
}
bail:
mutex_unlock(&nat_ctx->lock);
return result;
}
/**
* ipa3_allocate_nat_device() - Allocates memory for the NAT device
* @mem: [in/out] memory parameters
*
* Called by NAT client driver to allocate memory for the NAT entries. Based on
* the request size either shared or system memory will be used.
*
* Returns: 0 on success, negative on failure
*/
int ipa3_allocate_nat_device(struct ipa_ioc_nat_alloc_mem *mem)
{
struct ipa3_nat_mem *nat_ctx = &(ipa3_ctx->nat_mem);
int gfp_flags = GFP_KERNEL | __GFP_ZERO;
int result;
IPADBG("passed memory size %zu\n", mem->size);
mutex_lock(&nat_ctx->lock);
if (strcmp(mem->dev_name, NAT_DEV_NAME)) {
IPAERR("Nat device name mismatch\n");
IPAERR("Expect: %s Recv: %s\n", NAT_DEV_NAME, mem->dev_name);
result = -EPERM;
goto bail;
}
	if (!nat_ctx->is_dev) {
		IPAERR("Nat device not created successfully during boot up\n");
		result = -EPERM;
		goto bail;
	}
	if (nat_ctx->is_dev_init) {
		IPAERR("Device already init\n");
		result = 0;
		goto bail;
	}
	if (mem->size == 0) {
		IPAERR("Invalid memory size\n");
		result = -EPERM;
		goto bail;
	}
if (mem->size > IPA_NAT_PHYS_MEM_SIZE) {
IPADBG("Allocating system memory\n");
nat_ctx->is_sys_mem = true;
nat_ctx->vaddr =
dma_alloc_coherent(ipa3_ctx->pdev, mem->size,
&nat_ctx->dma_handle, gfp_flags);
if (nat_ctx->vaddr == NULL) {
IPAERR("memory alloc failed\n");
result = -ENOMEM;
goto bail;
}
nat_ctx->size = mem->size;
} else {
IPADBG("using shared(local) memory\n");
nat_ctx->is_sys_mem = false;
}
nat_ctx->is_dev_init = true;
IPADBG("IPA NAT dev init successfully\n");
result = 0;
bail:
mutex_unlock(&nat_ctx->lock);
return result;
}
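The NAT init path in ipa3_nat_init_cmd() below validates each table by first ruling out integer overflow (`offset > UINT_MAX - span`) and only then comparing `offset + span` against the allocated size. The same guard, sketched in isolation (`demo_table_fits` is an illustrative name; entry sizes in the driver are 32 bytes for rule tables and 4 for index tables):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Returns true when [offset, offset + entries * entry_size) fits
 * inside mem_size without the sum wrapping a 32-bit offset. Mirrors
 * the overflow-then-bounds check pattern of ipa3_nat_init_cmd(). */
bool demo_table_fits(uint32_t offset, uint32_t entries,
		     uint32_t entry_size, size_t mem_size)
{
	uint64_t span = (uint64_t)entries * entry_size;

	/* reject before adding, so offset + span can never wrap */
	if (span > UINT32_MAX || offset > UINT32_MAX - (uint32_t)span)
		return false;
	return (size_t)offset + (size_t)span <= mem_size;
}
```

Checking against `UINT32_MAX - span` before performing the addition is what makes the subsequent `offset + span` comparison safe even on 32-bit builds.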
/* IOCTL function handlers */
/**
* ipa3_nat_init_cmd() - Post IP_V4_NAT_INIT command to IPA HW
* @init: [in] initialization command attributes
*
* Called by NAT client driver to post IP_V4_NAT_INIT command to IPA HW
*
* Returns: 0 on success, negative on failure
*/
int ipa3_nat_init_cmd(struct ipa_ioc_v4_nat_init *init)
{
#define TBL_ENTRY_SIZE 32
#define INDX_TBL_ENTRY_SIZE 4
struct ipahal_imm_cmd_pyld *nop_cmd_pyld = NULL;
struct ipa3_desc desc[2];
struct ipahal_imm_cmd_ip_v4_nat_init cmd;
struct ipahal_imm_cmd_pyld *cmd_pyld = NULL;
int result;
u32 offset = 0;
size_t tmp;
IPADBG("\n");
if (init->table_entries == 0) {
IPAERR("Table entries is zero\n");
return -EPERM;
}
/* check for integer overflow */
if (init->ipv4_rules_offset >
UINT_MAX - (TBL_ENTRY_SIZE * (init->table_entries + 1))) {
IPAERR("Detected overflow\n");
return -EPERM;
}
/* Check Table Entry offset is not
* beyond allocated size
*/
tmp = init->ipv4_rules_offset +
(TBL_ENTRY_SIZE * (init->table_entries + 1));
if (tmp > ipa3_ctx->nat_mem.size) {
IPAERR("Table rules offset not valid\n");
IPAERR("offset:%d entries:%d size:%zu mem_size:%zu\n",
init->ipv4_rules_offset, (init->table_entries + 1),
tmp, ipa3_ctx->nat_mem.size);
return -EPERM;
}
/* check for integer overflow */
if (init->expn_rules_offset >
UINT_MAX - (TBL_ENTRY_SIZE * init->expn_table_entries)) {
IPAERR("Detected overflow\n");
return -EPERM;
}
/* Check Expn Table Entry offset is not
* beyond allocated size
*/
tmp = init->expn_rules_offset +
(TBL_ENTRY_SIZE * init->expn_table_entries);
if (tmp > ipa3_ctx->nat_mem.size) {
IPAERR("Expn Table rules offset not valid\n");
IPAERR("offset:%d entries:%d size:%zu mem_size:%zu\n",
init->expn_rules_offset, init->expn_table_entries,
tmp, ipa3_ctx->nat_mem.size);
return -EPERM;
}
/* check for integer overflow */
if (init->index_offset >
UINT_MAX - (INDX_TBL_ENTRY_SIZE * (init->table_entries + 1))) {
IPAERR("Detected overflow\n");
return -EPERM;
}
/* Check Indx Table Entry offset is not
* beyond allocated size
*/
tmp = init->index_offset +
(INDX_TBL_ENTRY_SIZE * (init->table_entries + 1));
if (tmp > ipa3_ctx->nat_mem.size) {
IPAERR("Indx Table rules offset not valid\n");
IPAERR("offset:%d entries:%d size:%zu mem_size:%zu\n",
init->index_offset, (init->table_entries + 1),
tmp, ipa3_ctx->nat_mem.size);
return -EPERM;
}
/* check for integer overflow */
if (init->index_expn_offset >
UINT_MAX - (INDX_TBL_ENTRY_SIZE * init->expn_table_entries)) {
IPAERR("Detected overflow\n");
return -EPERM;
}
/* Check Expn Table entry offset is not
* beyond allocated size
*/
tmp = init->index_expn_offset +
(INDX_TBL_ENTRY_SIZE * init->expn_table_entries);
if (tmp > ipa3_ctx->nat_mem.size) {
IPAERR("Indx Expn Table rules offset not valid\n");
IPAERR("offset:%d entries:%d size:%zu mem_size:%zu\n",
init->index_expn_offset, init->expn_table_entries,
tmp, ipa3_ctx->nat_mem.size);
return -EPERM;
}
memset(&desc, 0, sizeof(desc));
/* NO-OP IC for ensuring that IPA pipeline is empty */
nop_cmd_pyld =
ipahal_construct_nop_imm_cmd(false, IPAHAL_HPS_CLEAR, false);
if (!nop_cmd_pyld) {
IPAERR("failed to construct NOP imm cmd\n");
result = -ENOMEM;
goto bail;
}
desc[0].opcode = ipahal_imm_cmd_get_opcode(IPA_IMM_CMD_REGISTER_WRITE);
desc[0].type = IPA_IMM_CMD_DESC;
desc[0].callback = NULL;
desc[0].user1 = NULL;
desc[0].user2 = 0;
desc[0].pyld = nop_cmd_pyld->data;
desc[0].len = nop_cmd_pyld->len;
if (ipa3_ctx->nat_mem.vaddr) {
IPADBG("using system memory for nat table\n");
cmd.ipv4_rules_addr_shared = false;
cmd.ipv4_expansion_rules_addr_shared = false;
cmd.index_table_addr_shared = false;
cmd.index_table_expansion_addr_shared = false;
offset = UINT_MAX - ipa3_ctx->nat_mem.dma_handle;
if ((init->ipv4_rules_offset > offset) ||
(init->expn_rules_offset > offset) ||
(init->index_offset > offset) ||
(init->index_expn_offset > offset)) {
IPAERR("Failed due to integer overflow\n");
IPAERR("nat.mem.dma_handle: 0x%pa\n",
&ipa3_ctx->nat_mem.dma_handle);
IPAERR("ipv4_rules_offset: 0x%x\n",
init->ipv4_rules_offset);
IPAERR("expn_rules_offset: 0x%x\n",
init->expn_rules_offset);
IPAERR("index_offset: 0x%x\n",
init->index_offset);
IPAERR("index_expn_offset: 0x%x\n",
init->index_expn_offset);
result = -EPERM;
goto free_nop;
}
cmd.ipv4_rules_addr =
ipa3_ctx->nat_mem.dma_handle + init->ipv4_rules_offset;
IPADBG("ipv4_rules_offset:0x%x\n", init->ipv4_rules_offset);
cmd.ipv4_expansion_rules_addr =
ipa3_ctx->nat_mem.dma_handle + init->expn_rules_offset;
IPADBG("expn_rules_offset:0x%x\n", init->expn_rules_offset);
cmd.index_table_addr =
ipa3_ctx->nat_mem.dma_handle + init->index_offset;
IPADBG("index_offset:0x%x\n", init->index_offset);
cmd.index_table_expansion_addr =
ipa3_ctx->nat_mem.dma_handle + init->index_expn_offset;
IPADBG("index_expn_offset:0x%x\n", init->index_expn_offset);
} else {
IPADBG("using shared(local) memory for nat table\n");
cmd.ipv4_rules_addr_shared = true;
cmd.ipv4_expansion_rules_addr_shared = true;
cmd.index_table_addr_shared = true;
cmd.index_table_expansion_addr_shared = true;
cmd.ipv4_rules_addr = init->ipv4_rules_offset +
IPA_RAM_NAT_OFST;
cmd.ipv4_expansion_rules_addr = init->expn_rules_offset +
IPA_RAM_NAT_OFST;
cmd.index_table_addr = init->index_offset +
IPA_RAM_NAT_OFST;
cmd.index_table_expansion_addr = init->index_expn_offset +
IPA_RAM_NAT_OFST;
}
cmd.table_index = init->tbl_index;
IPADBG("Table index:0x%x\n", cmd.table_index);
cmd.size_base_tables = init->table_entries;
IPADBG("Base Table size:0x%x\n", cmd.size_base_tables);
cmd.size_expansion_tables = init->expn_table_entries;
IPADBG("Expansion Table size:0x%x\n", cmd.size_expansion_tables);
cmd.public_ip_addr = init->ip_addr;
IPADBG("Public ip address:0x%x\n", cmd.public_ip_addr);
cmd_pyld = ipahal_construct_imm_cmd(
IPA_IMM_CMD_IP_V4_NAT_INIT, &cmd, false);
if (!cmd_pyld) {
IPAERR("Fail to construct ip_v4_nat_init imm cmd\n");
result = -EPERM;
goto free_nop;
}
desc[1].opcode = ipahal_imm_cmd_get_opcode(IPA_IMM_CMD_IP_V4_NAT_INIT);
desc[1].type = IPA_IMM_CMD_DESC;
desc[1].callback = NULL;
desc[1].user1 = NULL;
desc[1].user2 = 0;
desc[1].pyld = cmd_pyld->data;
desc[1].len = cmd_pyld->len;
IPADBG("posting v4 init command\n");
if (ipa3_send_cmd(2, desc)) {
IPAERR("Fail to send immediate command\n");
result = -EPERM;
goto destroy_imm_cmd;
}
ipa3_ctx->nat_mem.public_ip_addr = init->ip_addr;
IPADBG("Table ip address:0x%x\n", ipa3_ctx->nat_mem.public_ip_addr);
ipa3_ctx->nat_mem.ipv4_rules_addr =
(char *)ipa3_ctx->nat_mem.nat_base_address + init->ipv4_rules_offset;
IPADBG("ipv4_rules_addr: 0x%p\n",
ipa3_ctx->nat_mem.ipv4_rules_addr);
ipa3_ctx->nat_mem.ipv4_expansion_rules_addr =
(char *)ipa3_ctx->nat_mem.nat_base_address + init->expn_rules_offset;
IPADBG("ipv4_expansion_rules_addr: 0x%p\n",
ipa3_ctx->nat_mem.ipv4_expansion_rules_addr);
ipa3_ctx->nat_mem.index_table_addr =
(char *)ipa3_ctx->nat_mem.nat_base_address +
init->index_offset;
IPADBG("index_table_addr: 0x%p\n",
ipa3_ctx->nat_mem.index_table_addr);
ipa3_ctx->nat_mem.index_table_expansion_addr =
(char *)ipa3_ctx->nat_mem.nat_base_address + init->index_expn_offset;
IPADBG("index_table_expansion_addr: 0x%p\n",
ipa3_ctx->nat_mem.index_table_expansion_addr);
IPADBG("size_base_tables: %d\n", init->table_entries);
ipa3_ctx->nat_mem.size_base_tables = init->table_entries;
IPADBG("size_expansion_tables: %d\n", init->expn_table_entries);
ipa3_ctx->nat_mem.size_expansion_tables = init->expn_table_entries;
IPADBG("return\n");
result = 0;
destroy_imm_cmd:
ipahal_destroy_imm_cmd(cmd_pyld);
free_nop:
ipahal_destroy_imm_cmd(nop_cmd_pyld);
bail:
return result;
}
/**
* ipa3_nat_dma_cmd() - Post NAT_DMA command to IPA HW
* @dma: [in] DMA command attributes
*
* Called by NAT client driver to post NAT_DMA command to IPA HW
*
* Returns: 0 on success, negative on failure
*/
int ipa3_nat_dma_cmd(struct ipa_ioc_nat_dma_cmd *dma)
{
#define NUM_OF_DESC 2
struct ipahal_imm_cmd_pyld *nop_cmd_pyld = NULL;
struct ipahal_imm_cmd_nat_dma cmd;
struct ipahal_imm_cmd_pyld *cmd_pyld = NULL;
struct ipa3_desc *desc = NULL;
u16 size = 0, cnt = 0;
int ret = 0;
IPADBG("\n");
if (dma->entries <= 0) {
IPAERR("Invalid number of commands %d\n",
dma->entries);
ret = -EPERM;
goto bail;
}
size = sizeof(struct ipa3_desc) * NUM_OF_DESC;
desc = kzalloc(size, GFP_KERNEL);
if (desc == NULL) {
IPAERR("Failed to alloc memory\n");
ret = -ENOMEM;
goto bail;
}
/* NO-OP IC for ensuring that IPA pipeline is empty */
nop_cmd_pyld =
ipahal_construct_nop_imm_cmd(false, IPAHAL_HPS_CLEAR, false);
if (!nop_cmd_pyld) {
IPAERR("Failed to construct NOP imm cmd\n");
ret = -ENOMEM;
goto bail;
}
desc[0].type = IPA_IMM_CMD_DESC;
desc[0].opcode = ipahal_imm_cmd_get_opcode(IPA_IMM_CMD_REGISTER_WRITE);
desc[0].callback = NULL;
desc[0].user1 = NULL;
desc[0].user2 = 0;
desc[0].pyld = nop_cmd_pyld->data;
desc[0].len = nop_cmd_pyld->len;
for (cnt = 0; cnt < dma->entries; cnt++) {
cmd.table_index = dma->dma[cnt].table_index;
cmd.base_addr = dma->dma[cnt].base_addr;
cmd.offset = dma->dma[cnt].offset;
cmd.data = dma->dma[cnt].data;
cmd_pyld = ipahal_construct_imm_cmd(
IPA_IMM_CMD_NAT_DMA, &cmd, false);
if (!cmd_pyld) {
IPAERR("Fail to construct nat_dma imm cmd\n");
continue;
}
desc[1].type = IPA_IMM_CMD_DESC;
desc[1].opcode = ipahal_imm_cmd_get_opcode(IPA_IMM_CMD_NAT_DMA);
desc[1].callback = NULL;
desc[1].user1 = NULL;
desc[1].user2 = 0;
desc[1].pyld = cmd_pyld->data;
desc[1].len = cmd_pyld->len;
ret = ipa3_send_cmd(NUM_OF_DESC, desc);
if (ret == -EPERM)
IPAERR("Fail to send immediate command %d\n", cnt);
ipahal_destroy_imm_cmd(cmd_pyld);
}
bail:
kfree(desc);
if (nop_cmd_pyld != NULL)
ipahal_destroy_imm_cmd(nop_cmd_pyld);
return ret;
}
/**
* ipa3_nat_free_mem_and_device() - free the NAT memory and remove the device
* @nat_ctx: [in] the IPA NAT memory to free
*
* Called by NAT client driver to free the NAT memory and remove the device
*/
void ipa3_nat_free_mem_and_device(struct ipa3_nat_mem *nat_ctx)
{
IPADBG("\n");
mutex_lock(&nat_ctx->lock);
if (nat_ctx->is_sys_mem) {
IPADBG("freeing the dma memory\n");
dma_free_coherent(
ipa3_ctx->pdev, nat_ctx->size,
nat_ctx->vaddr, nat_ctx->dma_handle);
nat_ctx->size = 0;
nat_ctx->vaddr = NULL;
}
nat_ctx->is_mapped = false;
nat_ctx->is_sys_mem = false;
nat_ctx->is_dev_init = false;
mutex_unlock(&nat_ctx->lock);
IPADBG("return\n");
}
/**
* ipa3_nat_del_cmd() - Delete a NAT table
* @del: [in] delete table parameters
*
* Called by NAT client driver to delete the nat table
*
* Returns: 0 on success, negative on failure
*/
int ipa3_nat_del_cmd(struct ipa_ioc_v4_nat_del *del)
{
struct ipahal_imm_cmd_pyld *nop_cmd_pyld = NULL;
struct ipa3_desc desc[2];
struct ipahal_imm_cmd_ip_v4_nat_init cmd;
struct ipahal_imm_cmd_pyld *cmd_pyld;
bool mem_type_shared = true;
u32 base_addr = IPA_NAT_PHYS_MEM_OFFSET;
int result;
IPADBG("\n");
if (ipa3_ctx->nat_mem.is_tmp_mem) {
IPAERR("using temp memory during nat del\n");
mem_type_shared = false;
base_addr = ipa3_ctx->nat_mem.tmp_dma_handle;
}
if (del->public_ip_addr == 0) {
IPAERR("Bad Parameter\n");
result = -EPERM;
goto bail;
}
memset(&desc, 0, sizeof(desc));
/* NO-OP IC for ensuring that IPA pipeline is empty */
nop_cmd_pyld =
ipahal_construct_nop_imm_cmd(false, IPAHAL_HPS_CLEAR, false);
if (!nop_cmd_pyld) {
IPAERR("Failed to construct NOP imm cmd\n");
result = -ENOMEM;
goto bail;
}
desc[0].opcode = ipahal_imm_cmd_get_opcode(IPA_IMM_CMD_REGISTER_WRITE);
desc[0].type = IPA_IMM_CMD_DESC;
desc[0].callback = NULL;
desc[0].user1 = NULL;
desc[0].user2 = 0;
desc[0].pyld = nop_cmd_pyld->data;
desc[0].len = nop_cmd_pyld->len;
cmd.table_index = del->table_index;
cmd.ipv4_rules_addr = base_addr;
cmd.ipv4_rules_addr_shared = mem_type_shared;
cmd.ipv4_expansion_rules_addr = base_addr;
cmd.ipv4_expansion_rules_addr_shared = mem_type_shared;
cmd.index_table_addr = base_addr;
cmd.index_table_addr_shared = mem_type_shared;
cmd.index_table_expansion_addr = base_addr;
cmd.index_table_expansion_addr_shared = mem_type_shared;
cmd.size_base_tables = 0;
cmd.size_expansion_tables = 0;
cmd.public_ip_addr = 0;
cmd_pyld = ipahal_construct_imm_cmd(
IPA_IMM_CMD_IP_V4_NAT_INIT, &cmd, false);
if (!cmd_pyld) {
IPAERR("Fail to construct ip_v4_nat_init imm cmd\n");
result = -EPERM;
goto destroy_regwrt_imm_cmd;
}
desc[1].opcode = ipahal_imm_cmd_get_opcode(IPA_IMM_CMD_IP_V4_NAT_INIT);
desc[1].type = IPA_IMM_CMD_DESC;
desc[1].callback = NULL;
desc[1].user1 = NULL;
desc[1].user2 = 0;
desc[1].pyld = cmd_pyld->data;
desc[1].len = cmd_pyld->len;
if (ipa3_send_cmd(2, desc)) {
IPAERR("Fail to send immediate command\n");
result = -EPERM;
goto destroy_imm_cmd;
}
ipa3_ctx->nat_mem.size_base_tables = 0;
ipa3_ctx->nat_mem.size_expansion_tables = 0;
ipa3_ctx->nat_mem.public_ip_addr = 0;
ipa3_ctx->nat_mem.ipv4_rules_addr = 0;
ipa3_ctx->nat_mem.ipv4_expansion_rules_addr = 0;
ipa3_ctx->nat_mem.index_table_addr = 0;
ipa3_ctx->nat_mem.index_table_expansion_addr = 0;
ipa3_nat_free_mem_and_device(&ipa3_ctx->nat_mem);
IPADBG("return\n");
result = 0;
destroy_imm_cmd:
ipahal_destroy_imm_cmd(cmd_pyld);
destroy_regwrt_imm_cmd:
ipahal_destroy_imm_cmd(nop_cmd_pyld);
bail:
return result;
}
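The four bounds checks in ipa3_nat_init_cmd() above all follow the same two-step pattern: first reject an offset whose addition would wrap past UINT_MAX, then reject a range that ends beyond the allocated table. A standalone userspace sketch of that guard (hypothetical helper name, not part of the driver; like the driver, it assumes the per-entry multiplication itself does not wrap):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

#define TBL_ENTRY_SIZE 32u

/* Returns true when [offset, offset + span) fits inside mem_size
 * without unsigned wrap-around, mirroring the driver's check order:
 * overflow guard first, then the bounds comparison. */
static bool nat_range_ok(unsigned int offset, unsigned int entries,
			 size_t mem_size)
{
	unsigned int span = TBL_ENTRY_SIZE * (entries + 1);

	if (offset > UINT_MAX - span)	/* offset + span would overflow */
		return false;
	return (size_t)offset + span <= mem_size;
}
```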

File diff suppressed because it is too large


@@ -0,0 +1,303 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef IPA_QMI_SERVICE_H
#define IPA_QMI_SERVICE_H
#include <linux/ipa.h>
#include <linux/ipa_qmi_service_v01.h>
#include <uapi/linux/msm_rmnet.h>
#include <soc/qcom/msm_qmi_interface.h>
#include "ipa_i.h"
#include <linux/rmnet_ipa_fd_ioctl.h>
/**
* name of the DL wwan default routing tables for v4 and v6
*/
#define IPA_A7_QMAP_HDR_NAME "ipa_qmap_hdr"
#define IPA_DFLT_WAN_RT_TBL_NAME "ipa_dflt_wan_rt"
#define MAX_NUM_Q6_RULE 35
#define MAX_NUM_QMI_RULE_CACHE 10
#define DEV_NAME "ipa-wan"
#define SUBSYS_MODEM "modem"
#define IPAWANDBG(fmt, args...) \
do { \
pr_debug(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
DEV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
DEV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPAWANDBG_LOW(fmt, args...) \
do { \
pr_debug(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
DEV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPAWANERR(fmt, args...) \
do { \
pr_err(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
DEV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
DEV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPAWANINFO(fmt, args...) \
do { \
pr_info(DEV_NAME " %s:%d " fmt, __func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
DEV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
DEV_NAME " %s:%d " fmt, ## args); \
} while (0)
extern struct ipa3_qmi_context *ipa3_qmi_ctx;
struct ipa3_qmi_context {
struct ipa_ioc_ext_intf_prop q6_ul_filter_rule[MAX_NUM_Q6_RULE];
u32 q6_ul_filter_rule_hdl[MAX_NUM_Q6_RULE];
int num_ipa_install_fltr_rule_req_msg;
struct ipa_install_fltr_rule_req_msg_v01
ipa_install_fltr_rule_req_msg_cache[MAX_NUM_QMI_RULE_CACHE];
int num_ipa_fltr_installed_notif_req_msg;
struct ipa_fltr_installed_notif_req_msg_v01
ipa_fltr_installed_notif_req_msg_cache[MAX_NUM_QMI_RULE_CACHE];
bool modem_cfg_emb_pipe_flt;
};
struct ipa3_rmnet_mux_val {
uint32_t mux_id;
int8_t vchannel_name[IFNAMSIZ];
bool mux_channel_set;
bool ul_flt_reg;
bool mux_hdr_set;
uint32_t hdr_hdl;
};
extern struct elem_info ipa3_init_modem_driver_req_msg_data_v01_ei[];
extern struct elem_info ipa3_init_modem_driver_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_indication_reg_req_msg_data_v01_ei[];
extern struct elem_info ipa3_indication_reg_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_master_driver_init_complt_ind_msg_data_v01_ei[];
extern struct elem_info ipa3_install_fltr_rule_req_msg_data_v01_ei[];
extern struct elem_info ipa3_install_fltr_rule_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_fltr_installed_notif_req_msg_data_v01_ei[];
extern struct elem_info ipa3_fltr_installed_notif_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_enable_force_clear_datapath_req_msg_data_v01_ei[];
extern struct elem_info ipa3_enable_force_clear_datapath_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_disable_force_clear_datapath_req_msg_data_v01_ei[];
extern struct elem_info
ipa3_disable_force_clear_datapath_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_config_req_msg_data_v01_ei[];
extern struct elem_info ipa3_config_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_get_data_stats_req_msg_data_v01_ei[];
extern struct elem_info ipa3_get_data_stats_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_get_apn_data_stats_req_msg_data_v01_ei[];
extern struct elem_info ipa3_get_apn_data_stats_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_set_data_usage_quota_req_msg_data_v01_ei[];
extern struct elem_info ipa3_set_data_usage_quota_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_data_usage_quota_reached_ind_msg_data_v01_ei[];
extern struct elem_info ipa3_stop_data_usage_quota_req_msg_data_v01_ei[];
extern struct elem_info ipa3_stop_data_usage_quota_resp_msg_data_v01_ei[];
extern struct elem_info ipa3_init_modem_driver_cmplt_req_msg_data_v01_ei[];
extern struct elem_info ipa3_init_modem_driver_cmplt_resp_msg_data_v01_ei[];
/**
* struct ipa3_rmnet_context - IPA rmnet context
* @ipa_rmnet_ssr: support modem SSR
* @polling_interval: Requested interval for polling tethered statistics
* @metered_mux_id: The mux ID on which quota has been set
*/
struct ipa3_rmnet_context {
bool ipa_rmnet_ssr;
u64 polling_interval;
u32 metered_mux_id;
};
extern struct ipa3_rmnet_context ipa3_rmnet_ctx;
#ifdef CONFIG_RMNET_IPA3
int ipa3_qmi_service_init(uint32_t wan_platform_type);
void ipa3_qmi_service_exit(void);
/* sending filter-install-request to modem*/
int ipa3_qmi_filter_request_send(
struct ipa_install_fltr_rule_req_msg_v01 *req);
/* sending filter-installed-notify-request to modem*/
int ipa3_qmi_filter_notify_send(struct ipa_fltr_installed_notif_req_msg_v01
*req);
/* voting for bus BW to ipa_rm*/
int ipa3_vote_for_bus_bw(uint32_t *bw_mbps);
int ipa3_qmi_enable_force_clear_datapath_send(
struct ipa_enable_force_clear_datapath_req_msg_v01 *req);
int ipa3_qmi_disable_force_clear_datapath_send(
struct ipa_disable_force_clear_datapath_req_msg_v01 *req);
int ipa3_copy_ul_filter_rule_to_ipa(struct ipa_install_fltr_rule_req_msg_v01
*rule_req);
int ipa3_wwan_update_mux_channel_prop(void);
int ipa3_wan_ioctl_init(void);
void ipa3_wan_ioctl_stop_qmi_messages(void);
void ipa3_wan_ioctl_enable_qmi_messages(void);
void ipa3_wan_ioctl_deinit(void);
void ipa3_qmi_stop_workqueues(void);
int rmnet_ipa3_poll_tethering_stats(struct wan_ioctl_poll_tethering_stats
*data);
int rmnet_ipa3_set_data_quota(struct wan_ioctl_set_data_quota *data);
void ipa3_broadcast_quota_reach_ind(uint32_t mux_id);
int rmnet_ipa3_set_tether_client_pipe(struct wan_ioctl_set_tether_client_pipe
*data);
int rmnet_ipa3_query_tethering_stats(struct wan_ioctl_query_tether_stats *data,
bool reset);
int ipa3_qmi_get_data_stats(struct ipa_get_data_stats_req_msg_v01 *req,
struct ipa_get_data_stats_resp_msg_v01 *resp);
int ipa3_qmi_get_network_stats(struct ipa_get_apn_data_stats_req_msg_v01 *req,
struct ipa_get_apn_data_stats_resp_msg_v01 *resp);
int ipa3_qmi_set_data_quota(struct ipa_set_data_usage_quota_req_msg_v01 *req);
int ipa3_qmi_stop_data_qouta(void);
void ipa3_q6_handshake_complete(bool ssr_bootup);
#else /* CONFIG_RMNET_IPA3 */
static inline int ipa3_qmi_service_init(uint32_t wan_platform_type)
{
return -EPERM;
}
static inline void ipa3_qmi_service_exit(void) { }
/* sending filter-install-request to modem*/
static inline int ipa3_qmi_filter_request_send(
struct ipa_install_fltr_rule_req_msg_v01 *req)
{
return -EPERM;
}
/* sending filter-installed-notify-request to modem*/
static inline int ipa3_qmi_filter_notify_send(
struct ipa_fltr_installed_notif_req_msg_v01 *req)
{
return -EPERM;
}
static inline int ipa3_qmi_enable_force_clear_datapath_send(
struct ipa_enable_force_clear_datapath_req_msg_v01 *req)
{
return -EPERM;
}
static inline int ipa3_qmi_disable_force_clear_datapath_send(
struct ipa_disable_force_clear_datapath_req_msg_v01 *req)
{
return -EPERM;
}
static inline int ipa3_copy_ul_filter_rule_to_ipa(
struct ipa_install_fltr_rule_req_msg_v01 *rule_req)
{
return -EPERM;
}
static inline int ipa3_wwan_update_mux_channel_prop(void)
{
return -EPERM;
}
static inline int ipa3_wan_ioctl_init(void)
{
return -EPERM;
}
static inline void ipa3_wan_ioctl_stop_qmi_messages(void) { }
static inline void ipa3_wan_ioctl_enable_qmi_messages(void) { }
static inline void ipa3_wan_ioctl_deinit(void) { }
static inline void ipa3_qmi_stop_workqueues(void) { }
static inline int ipa3_vote_for_bus_bw(uint32_t *bw_mbps)
{
return -EPERM;
}
static inline int rmnet_ipa3_poll_tethering_stats(
struct wan_ioctl_poll_tethering_stats *data)
{
return -EPERM;
}
static inline int rmnet_ipa3_set_data_quota(
struct wan_ioctl_set_data_quota *data)
{
return -EPERM;
}
static inline void ipa3_broadcast_quota_reach_ind(uint32_t mux_id) { }
static inline int ipa3_qmi_get_data_stats(
struct ipa_get_data_stats_req_msg_v01 *req,
struct ipa_get_data_stats_resp_msg_v01 *resp)
{
return -EPERM;
}
static inline int ipa3_qmi_get_network_stats(
struct ipa_get_apn_data_stats_req_msg_v01 *req,
struct ipa_get_apn_data_stats_resp_msg_v01 *resp)
{
return -EPERM;
}
static inline int ipa3_qmi_set_data_quota(
struct ipa_set_data_usage_quota_req_msg_v01 *req)
{
return -EPERM;
}
static inline int ipa3_qmi_stop_data_qouta(void)
{
return -EPERM;
}
static inline void ipa3_q6_handshake_complete(bool ssr_bootup) { }
#endif /* CONFIG_RMNET_IPA3 */
#endif /* IPA_QMI_SERVICE_H */
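The header above compiles in either configuration: with CONFIG_RMNET_IPA3 set it declares the real entry points, otherwise it supplies static inline stubs returning -EPERM so that callers build and link unchanged. A minimal userspace sketch of the same config-gated stub pattern (hypothetical names):

```c
#include <assert.h>
#include <errno.h>

/* Uncomment to compile the feature in:
 * #define CONFIG_DEMO_FEATURE 1
 */

#ifdef CONFIG_DEMO_FEATURE
int demo_service_init(unsigned int platform_type);
#else
/* Feature compiled out: the stub keeps callers linking and reports
 * "operation not permitted", matching the header's convention. */
static inline int demo_service_init(unsigned int platform_type)
{
	(void)platform_type;
	return -EPERM;
}
#endif
```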

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,153 @@
/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#undef TRACE_SYSTEM
#define TRACE_SYSTEM ipa
#define TRACE_INCLUDE_FILE ipa_trace
#if !defined(_IPA_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
#define _IPA_TRACE_H
#include <linux/tracepoint.h>
TRACE_EVENT(
intr_to_poll3,
TP_PROTO(unsigned long client),
TP_ARGS(client),
TP_STRUCT__entry(
__field(unsigned long, client)
),
TP_fast_assign(
__entry->client = client;
),
TP_printk("client=%lu", __entry->client)
);
TRACE_EVENT(
poll_to_intr3,
TP_PROTO(unsigned long client),
TP_ARGS(client),
TP_STRUCT__entry(
__field(unsigned long, client)
),
TP_fast_assign(
__entry->client = client;
),
TP_printk("client=%lu", __entry->client)
);
TRACE_EVENT(
idle_sleep_enter3,
TP_PROTO(unsigned long client),
TP_ARGS(client),
TP_STRUCT__entry(
__field(unsigned long, client)
),
TP_fast_assign(
__entry->client = client;
),
TP_printk("client=%lu", __entry->client)
);
TRACE_EVENT(
idle_sleep_exit3,
TP_PROTO(unsigned long client),
TP_ARGS(client),
TP_STRUCT__entry(
__field(unsigned long, client)
),
TP_fast_assign(
__entry->client = client;
),
TP_printk("client=%lu", __entry->client)
);
TRACE_EVENT(
rmnet_ipa_netifni3,
TP_PROTO(unsigned long rx_pkt_cnt),
TP_ARGS(rx_pkt_cnt),
TP_STRUCT__entry(
__field(unsigned long, rx_pkt_cnt)
),
TP_fast_assign(
__entry->rx_pkt_cnt = rx_pkt_cnt;
),
TP_printk("rx_pkt_cnt=%lu", __entry->rx_pkt_cnt)
);
TRACE_EVENT(
rmnet_ipa_netifrx3,
TP_PROTO(unsigned long rx_pkt_cnt),
TP_ARGS(rx_pkt_cnt),
TP_STRUCT__entry(
__field(unsigned long, rx_pkt_cnt)
),
TP_fast_assign(
__entry->rx_pkt_cnt = rx_pkt_cnt;
),
TP_printk("rx_pkt_cnt=%lu", __entry->rx_pkt_cnt)
);
TRACE_EVENT(
rmnet_ipa_netif_rcv_skb3,
TP_PROTO(unsigned long rx_pkt_cnt),
TP_ARGS(rx_pkt_cnt),
TP_STRUCT__entry(
__field(unsigned long, rx_pkt_cnt)
),
TP_fast_assign(
__entry->rx_pkt_cnt = rx_pkt_cnt;
),
TP_printk("rx_pkt_cnt=%lu", __entry->rx_pkt_cnt)
);
#endif /* _IPA_TRACE_H */
/* This part must be outside protection */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#include <trace/define_trace.h>
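Each TRACE_EVENT above declares a record layout (TP_STRUCT__entry), a fast copy performed at the trace site (TP_fast_assign), and a printk-style formatter applied only when the record is read out (TP_printk). A rough userspace analogue of that split (hypothetical names; a single static record stands in for the kernel's per-CPU ring buffer):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

struct trace_entry {
	unsigned long client;
};

static struct trace_entry last_event;

/* Mirrors TP_fast_assign(): only copy the argument at the trace site,
 * deferring all formatting. */
static void trace_intr_to_poll3(unsigned long client)
{
	last_event.client = client;
}

/* Mirrors TP_printk(): format the stored record on read-out. */
static int format_last_event(char *buf, size_t len)
{
	return snprintf(buf, len, "client=%lu", last_event.client);
}
```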


@@ -0,0 +1,991 @@
/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include "ipa_i.h"
#include <linux/delay.h>
#define IPA_RAM_UC_SMEM_SIZE 128
#define IPA_HW_INTERFACE_VERSION 0x2000
#define IPA_PKT_FLUSH_TO_US 100
#define IPA_UC_POLL_SLEEP_USEC 100
#define IPA_UC_POLL_MAX_RETRY 10000
/**
* Mailbox register used to interrupt the HWP for a CPU command.
* The IPA_UC_MAILBOX_m_n doorbell is used instead of IPA_IRQ_EE_UC_0
* due to a HW limitation.
*
*/
#define IPA_CPU_2_HW_CMD_MBOX_m 0
#define IPA_CPU_2_HW_CMD_MBOX_n 23
/**
* enum ipa3_cpu_2_hw_commands - Values that represent the commands from the CPU
* IPA_CPU_2_HW_CMD_NO_OP : No operation is required.
* IPA_CPU_2_HW_CMD_UPDATE_FLAGS : Update SW flags which defines the behavior
* of HW.
* IPA_CPU_2_HW_CMD_DEBUG_RUN_TEST : Launch predefined test over HW.
* IPA_CPU_2_HW_CMD_DEBUG_GET_INFO : Read HW internal debug information.
* IPA_CPU_2_HW_CMD_ERR_FATAL : CPU instructs HW to perform error fatal
* handling.
* IPA_CPU_2_HW_CMD_CLK_GATE : CPU instructs HW to goto Clock Gated state.
* IPA_CPU_2_HW_CMD_CLK_UNGATE : CPU instructs HW to goto Clock Ungated state.
* IPA_CPU_2_HW_CMD_MEMCPY : CPU instructs HW to do memcopy using QMB.
* IPA_CPU_2_HW_CMD_RESET_PIPE : Command to reset a pipe - SW WA for a HW bug.
* IPA_CPU_2_HW_CMD_REG_WRITE : Command to write a value to an IPA register.
* IPA_CPU_2_HW_CMD_GSI_CH_EMPTY : Command to check for GSI channel emptiness.
*/
enum ipa3_cpu_2_hw_commands {
IPA_CPU_2_HW_CMD_NO_OP =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 0),
IPA_CPU_2_HW_CMD_UPDATE_FLAGS =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 1),
IPA_CPU_2_HW_CMD_DEBUG_RUN_TEST =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 2),
IPA_CPU_2_HW_CMD_DEBUG_GET_INFO =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 3),
IPA_CPU_2_HW_CMD_ERR_FATAL =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 4),
IPA_CPU_2_HW_CMD_CLK_GATE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 5),
IPA_CPU_2_HW_CMD_CLK_UNGATE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 6),
IPA_CPU_2_HW_CMD_MEMCPY =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 7),
IPA_CPU_2_HW_CMD_RESET_PIPE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 8),
IPA_CPU_2_HW_CMD_REG_WRITE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 9),
IPA_CPU_2_HW_CMD_GSI_CH_EMPTY =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 10),
};
/**
* enum ipa3_hw_2_cpu_responses - Values that represent common HW responses
* to CPU commands.
* @IPA_HW_2_CPU_RESPONSE_NO_OP : No operation response
* @IPA_HW_2_CPU_RESPONSE_INIT_COMPLETED : HW shall send this command once
* boot sequence is completed and HW is ready to serve commands from CPU
* @IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED: Response to CPU commands
* @IPA_HW_2_CPU_RESPONSE_DEBUG_GET_INFO : Response to
* IPA_CPU_2_HW_CMD_DEBUG_GET_INFO command
*/
enum ipa3_hw_2_cpu_responses {
IPA_HW_2_CPU_RESPONSE_NO_OP =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 0),
IPA_HW_2_CPU_RESPONSE_INIT_COMPLETED =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 1),
IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 2),
IPA_HW_2_CPU_RESPONSE_DEBUG_GET_INFO =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 3),
};
/**
* struct IpaHwMemCopyData_t - Structure holding the parameters
* for IPA_CPU_2_HW_CMD_MEMCPY command.
*
* The parameters are passed as immediate params in the shared memory
*/
struct IpaHwMemCopyData_t {
u32 destination_addr;
u32 source_addr;
u32 dest_buffer_size;
u32 source_buffer_size;
};
/**
* union IpaHwResetPipeCmdData_t - Structure holding the parameters
* for IPA_CPU_2_HW_CMD_RESET_PIPE command.
* @pipeNum : Pipe number to be reset
* @direction : 1 - IPA Producer, 0 - IPA Consumer
* @reserved_02_03 : Reserved
*
* The parameters are passed as immediate params in the shared memory
*/
union IpaHwResetPipeCmdData_t {
struct IpaHwResetPipeCmdParams_t {
u8 pipeNum;
u8 direction;
u32 reserved_02_03;
} __packed params;
u32 raw32b;
} __packed;
/**
* struct IpaHwRegWriteCmdData_t - holds the parameters for
* IPA_CPU_2_HW_CMD_REG_WRITE command. Parameters are
* sent as 64b immediate parameters.
* @RegisterAddress: RG10 register address where the value needs to be written
* @RegisterValue: 32-Bit value to be written into the register
*/
struct IpaHwRegWriteCmdData_t {
u32 RegisterAddress;
u32 RegisterValue;
};
/**
* union IpaHwCpuCmdCompletedResponseData_t - Structure holding the parameters
* for IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED response.
* @originalCmdOp : The original command opcode
* @status : 0 for success indication, otherwise failure
* @reserved : Reserved
*
* Parameters are sent as 32b immediate parameters.
*/
union IpaHwCpuCmdCompletedResponseData_t {
struct IpaHwCpuCmdCompletedResponseParams_t {
u32 originalCmdOp:8;
u32 status:8;
u32 reserved:16;
} __packed params;
u32 raw32b;
} __packed;
/**
* union IpaHwUpdateFlagsCmdData_t - Structure holding the parameters for
* IPA_CPU_2_HW_CMD_UPDATE_FLAGS command
* @newFlags: SW flags defined the behavior of HW.
* This field is expected to be used as bitmask for enum ipa3_hw_flags
*/
union IpaHwUpdateFlagsCmdData_t {
struct IpaHwUpdateFlagsCmdParams_t {
u32 newFlags;
} params;
u32 raw32b;
};
/**
* union IpaHwChkChEmptyCmdData_t - Structure holding the parameters for
* IPA_CPU_2_HW_CMD_GSI_CH_EMPTY command. Parameters are sent as 32b
* immediate parameters.
* @ee_n : EE owner of the channel
* @vir_ch_id : GSI virtual channel ID of the channel to be checked for emptiness
* @reserved_02_04 : Reserved
*/
union IpaHwChkChEmptyCmdData_t {
struct IpaHwChkChEmptyCmdParams_t {
u8 ee_n;
u8 vir_ch_id;
u16 reserved_02_04;
} __packed params;
u32 raw32b;
} __packed;
/**
* When the resource group 10 limitation mitigation is enabled, the uC
* send-command path must be able to run in interrupt context, so a
* spinlock is used instead of a mutex.
*/
#define IPA3_UC_LOCK(flags) \
do { \
if (ipa3_ctx->apply_rg10_wa) \
spin_lock_irqsave(&ipa3_ctx->uc_ctx.uc_spinlock, flags); \
else \
mutex_lock(&ipa3_ctx->uc_ctx.uc_lock); \
} while (0)
#define IPA3_UC_UNLOCK(flags) \
do { \
if (ipa3_ctx->apply_rg10_wa) \
spin_unlock_irqrestore(&ipa3_ctx->uc_ctx.uc_spinlock, flags); \
else \
mutex_unlock(&ipa3_ctx->uc_ctx.uc_lock); \
} while (0)
struct ipa3_uc_hdlrs ipa3_uc_hdlrs[IPA_HW_NUM_FEATURES] = { { 0 } };
const char *ipa_hw_error_str(enum ipa3_hw_errors err_type)
{
const char *str;
switch (err_type) {
case IPA_HW_ERROR_NONE:
str = "IPA_HW_ERROR_NONE";
break;
case IPA_HW_INVALID_DOORBELL_ERROR:
str = "IPA_HW_INVALID_DOORBELL_ERROR";
break;
case IPA_HW_DMA_ERROR:
str = "IPA_HW_DMA_ERROR";
break;
case IPA_HW_FATAL_SYSTEM_ERROR:
str = "IPA_HW_FATAL_SYSTEM_ERROR";
break;
case IPA_HW_INVALID_OPCODE:
str = "IPA_HW_INVALID_OPCODE";
break;
case IPA_HW_INVALID_PARAMS:
str = "IPA_HW_INVALID_PARAMS";
break;
case IPA_HW_CONS_DISABLE_CMD_GSI_STOP_FAILURE:
str = "IPA_HW_CONS_DISABLE_CMD_GSI_STOP_FAILURE";
break;
case IPA_HW_PROD_DISABLE_CMD_GSI_STOP_FAILURE:
str = "IPA_HW_PROD_DISABLE_CMD_GSI_STOP_FAILURE";
break;
case IPA_HW_GSI_CH_NOT_EMPTY_FAILURE:
str = "IPA_HW_GSI_CH_NOT_EMPTY_FAILURE";
break;
default:
str = "INVALID ipa_hw_errors type";
}
return str;
}
static void ipa3_log_evt_hdlr(void)
{
int i;
if (!ipa3_ctx->uc_ctx.uc_event_top_ofst) {
ipa3_ctx->uc_ctx.uc_event_top_ofst =
ipa3_ctx->uc_ctx.uc_sram_mmio->eventParams;
if (ipa3_ctx->uc_ctx.uc_event_top_ofst +
sizeof(struct IpaHwEventLogInfoData_t) >=
ipa3_ctx->ctrl->ipa_reg_base_ofst +
ipahal_get_reg_n_ofst(IPA_SRAM_DIRECT_ACCESS_n, 0) +
ipa3_ctx->smem_sz) {
IPAERR("uc_top 0x%x outside SRAM\n",
ipa3_ctx->uc_ctx.uc_event_top_ofst);
goto bad_uc_top_ofst;
}
ipa3_ctx->uc_ctx.uc_event_top_mmio = ioremap(
ipa3_ctx->ipa_wrapper_base +
ipa3_ctx->uc_ctx.uc_event_top_ofst,
sizeof(struct IpaHwEventLogInfoData_t));
if (!ipa3_ctx->uc_ctx.uc_event_top_mmio) {
IPAERR("fail to ioremap uc top\n");
goto bad_uc_top_ofst;
}
for (i = 0; i < IPA_HW_NUM_FEATURES; i++) {
if (ipa3_uc_hdlrs[i].ipa_uc_event_log_info_hdlr)
ipa3_uc_hdlrs[i].ipa_uc_event_log_info_hdlr
(ipa3_ctx->uc_ctx.uc_event_top_mmio);
}
} else {
if (ipa3_ctx->uc_ctx.uc_sram_mmio->eventParams !=
ipa3_ctx->uc_ctx.uc_event_top_ofst) {
IPAERR("uc top ofst changed new=%u cur=%u\n",
ipa3_ctx->uc_ctx.uc_sram_mmio->
eventParams,
ipa3_ctx->uc_ctx.uc_event_top_ofst);
}
}
return;
bad_uc_top_ofst:
ipa3_ctx->uc_ctx.uc_event_top_ofst = 0;
}
/**
* ipa3_uc_state_check() - Check the status of the uC interface
*
* Return value: 0 if the uC is loaded, interface is initialized
* and there was no recent failure in one of the commands.
* A negative value is returned otherwise.
*/
int ipa3_uc_state_check(void)
{
if (!ipa3_ctx->uc_ctx.uc_inited) {
IPAERR("uC interface not initialized\n");
return -EFAULT;
}
if (!ipa3_ctx->uc_ctx.uc_loaded) {
IPAERR("uC is not loaded\n");
return -EFAULT;
}
if (ipa3_ctx->uc_ctx.uc_failed) {
IPAERR("uC has failed its last command\n");
return -EFAULT;
}
return 0;
}
/**
* ipa3_uc_loaded_check() - Check the uC has been loaded
*
* Return value: 1 if the uC is loaded, 0 otherwise
*/
int ipa3_uc_loaded_check(void)
{
return ipa3_ctx->uc_ctx.uc_loaded;
}
EXPORT_SYMBOL(ipa3_uc_loaded_check);
static void ipa3_uc_event_handler(enum ipa_irq_type interrupt,
void *private_data,
void *interrupt_data)
{
union IpaHwErrorEventData_t evt;
u8 feature;
WARN_ON(private_data != ipa3_ctx);
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
IPADBG("uC evt opcode=%u\n",
ipa3_ctx->uc_ctx.uc_sram_mmio->eventOp);
feature = EXTRACT_UC_FEATURE(ipa3_ctx->uc_ctx.uc_sram_mmio->eventOp);
if (feature >= IPA_HW_FEATURE_MAX) {
IPAERR("Invalid feature %u for event %u\n",
feature, ipa3_ctx->uc_ctx.uc_sram_mmio->eventOp);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return;
}
/* Feature specific handling */
if (ipa3_uc_hdlrs[feature].ipa_uc_event_hdlr)
ipa3_uc_hdlrs[feature].ipa_uc_event_hdlr
(ipa3_ctx->uc_ctx.uc_sram_mmio);
/* General handling */
if (ipa3_ctx->uc_ctx.uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_ERROR) {
evt.raw32b = ipa3_ctx->uc_ctx.uc_sram_mmio->eventParams;
IPAERR("uC Error, evt errorType = %s\n",
ipa_hw_error_str(evt.params.errorType));
ipa3_ctx->uc_ctx.uc_failed = true;
ipa3_ctx->uc_ctx.uc_error_type = evt.params.errorType;
ipa3_ctx->uc_ctx.uc_error_timestamp =
ipahal_read_reg(IPA_TAG_TIMER);
BUG();
} else if (ipa3_ctx->uc_ctx.uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_LOG_INFO) {
IPADBG("uC evt log info ofst=0x%x\n",
ipa3_ctx->uc_ctx.uc_sram_mmio->eventParams);
ipa3_log_evt_hdlr();
} else {
IPADBG("unsupported uC evt opcode=%u\n",
ipa3_ctx->uc_ctx.uc_sram_mmio->eventOp);
}
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
}
int ipa3_uc_panic_notifier(struct notifier_block *this,
unsigned long event, void *ptr)
{
int result = 0;
struct ipa_active_client_logging_info log_info;
IPADBG("this=%p evt=%lu ptr=%p\n", this, event, ptr);
result = ipa3_uc_state_check();
if (result)
goto fail;
IPA_ACTIVE_CLIENTS_PREP_SIMPLE(log_info);
if (ipa3_inc_client_enable_clks_no_block(&log_info))
goto fail;
ipa3_ctx->uc_ctx.uc_sram_mmio->cmdOp =
IPA_CPU_2_HW_CMD_ERR_FATAL;
ipa3_ctx->uc_ctx.pending_cmd = ipa3_ctx->uc_ctx.uc_sram_mmio->cmdOp;
/* ensure write to shared memory is done before triggering uc */
wmb();
if (ipa3_ctx->apply_rg10_wa)
ipahal_write_reg_mn(IPA_UC_MAILBOX_m_n,
IPA_CPU_2_HW_CMD_MBOX_m,
IPA_CPU_2_HW_CMD_MBOX_n, 0x1);
else
ipahal_write_reg_n(IPA_IRQ_EE_UC_n, 0, 0x1);
/* give uc enough time to save state */
udelay(IPA_PKT_FLUSH_TO_US);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
IPADBG("err_fatal issued\n");
fail:
return NOTIFY_DONE;
}
static void ipa3_uc_response_hdlr(enum ipa_irq_type interrupt,
void *private_data,
void *interrupt_data)
{
union IpaHwCpuCmdCompletedResponseData_t uc_rsp;
u8 feature;
int res;
int i;
WARN_ON(private_data != ipa3_ctx);
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
IPADBG("uC rsp opcode=%u\n",
ipa3_ctx->uc_ctx.uc_sram_mmio->responseOp);
feature = EXTRACT_UC_FEATURE(ipa3_ctx->uc_ctx.uc_sram_mmio->responseOp);
if (feature >= IPA_HW_FEATURE_MAX) {
IPAERR("Invalid feature %u for response %u\n",
feature, ipa3_ctx->uc_ctx.uc_sram_mmio->responseOp);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return;
}
/* Feature specific handling */
if (ipa3_uc_hdlrs[feature].ipa3_uc_response_hdlr) {
res = ipa3_uc_hdlrs[feature].ipa3_uc_response_hdlr(
ipa3_ctx->uc_ctx.uc_sram_mmio,
&ipa3_ctx->uc_ctx.uc_status);
if (res == 0) {
IPADBG("feature %d specific response handler\n",
feature);
complete_all(&ipa3_ctx->uc_ctx.uc_completion);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return;
}
}
/* General handling */
if (ipa3_ctx->uc_ctx.uc_sram_mmio->responseOp ==
IPA_HW_2_CPU_RESPONSE_INIT_COMPLETED) {
ipa3_ctx->uc_ctx.uc_loaded = true;
IPADBG("IPA uC loaded\n");
/*
* The proxy vote is held until uC is loaded to ensure that
* IPA_HW_2_CPU_RESPONSE_INIT_COMPLETED is received.
*/
ipa3_proxy_clk_unvote();
for (i = 0; i < IPA_HW_NUM_FEATURES; i++) {
if (ipa3_uc_hdlrs[i].ipa_uc_loaded_hdlr)
ipa3_uc_hdlrs[i].ipa_uc_loaded_hdlr();
}
} else if (ipa3_ctx->uc_ctx.uc_sram_mmio->responseOp ==
IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED) {
uc_rsp.raw32b = ipa3_ctx->uc_ctx.uc_sram_mmio->responseParams;
IPADBG("uC cmd response opcode=%u status=%u\n",
uc_rsp.params.originalCmdOp,
uc_rsp.params.status);
if (uc_rsp.params.originalCmdOp ==
ipa3_ctx->uc_ctx.pending_cmd) {
ipa3_ctx->uc_ctx.uc_status = uc_rsp.params.status;
complete_all(&ipa3_ctx->uc_ctx.uc_completion);
} else {
IPAERR("Expected cmd=%u rcvd cmd=%u\n",
ipa3_ctx->uc_ctx.pending_cmd,
uc_rsp.params.originalCmdOp);
}
} else {
IPAERR("Unsupported uC rsp opcode = %u\n",
ipa3_ctx->uc_ctx.uc_sram_mmio->responseOp);
}
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
}
static int ipa3_uc_send_cmd_64b_param(u32 cmd_lo, u32 cmd_hi, u32 opcode,
u32 expected_status, bool polling_mode, unsigned long timeout_jiffies)
{
int index;
union IpaHwCpuCmdCompletedResponseData_t uc_rsp;
unsigned long flags;
int retries = 0;
send_cmd_lock:
IPA3_UC_LOCK(flags);
if (ipa3_uc_state_check()) {
IPADBG("uC send command aborted\n");
IPA3_UC_UNLOCK(flags);
return -EBADF;
}
send_cmd:
if (ipa3_ctx->apply_rg10_wa) {
if (!polling_mode)
IPADBG("Overriding mode to polling mode\n");
polling_mode = true;
} else {
init_completion(&ipa3_ctx->uc_ctx.uc_completion);
}
ipa3_ctx->uc_ctx.uc_sram_mmio->cmdParams = cmd_lo;
ipa3_ctx->uc_ctx.uc_sram_mmio->cmdParams_hi = cmd_hi;
ipa3_ctx->uc_ctx.uc_sram_mmio->cmdOp = opcode;
ipa3_ctx->uc_ctx.pending_cmd = opcode;
ipa3_ctx->uc_ctx.uc_sram_mmio->responseOp = 0;
ipa3_ctx->uc_ctx.uc_sram_mmio->responseParams = 0;
ipa3_ctx->uc_ctx.uc_status = 0;
/* ensure write to shared memory is done before triggering uc */
wmb();
if (ipa3_ctx->apply_rg10_wa)
ipahal_write_reg_mn(IPA_UC_MAILBOX_m_n,
IPA_CPU_2_HW_CMD_MBOX_m,
IPA_CPU_2_HW_CMD_MBOX_n, 0x1);
else
ipahal_write_reg_n(IPA_IRQ_EE_UC_n, 0, 0x1);
if (polling_mode) {
for (index = 0; index < IPA_UC_POLL_MAX_RETRY; index++) {
if (ipa3_ctx->uc_ctx.uc_sram_mmio->responseOp ==
IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED) {
uc_rsp.raw32b = ipa3_ctx->uc_ctx.uc_sram_mmio->
responseParams;
if (uc_rsp.params.originalCmdOp ==
ipa3_ctx->uc_ctx.pending_cmd) {
ipa3_ctx->uc_ctx.uc_status =
uc_rsp.params.status;
break;
}
}
if (ipa3_ctx->apply_rg10_wa)
udelay(IPA_UC_POLL_SLEEP_USEC);
else
usleep_range(IPA_UC_POLL_SLEEP_USEC,
IPA_UC_POLL_SLEEP_USEC);
}
if (index == IPA_UC_POLL_MAX_RETRY) {
IPAERR("uC max polling retries reached\n");
if (ipa3_ctx->uc_ctx.uc_failed) {
IPAERR("uC reported an error, errorType = %s\n",
ipa_hw_error_str(ipa3_ctx->
uc_ctx.uc_error_type));
}
IPA3_UC_UNLOCK(flags);
BUG();
return -EFAULT;
}
} else {
if (wait_for_completion_timeout(&ipa3_ctx->uc_ctx.uc_completion,
timeout_jiffies) == 0) {
IPAERR("uC timed out\n");
if (ipa3_ctx->uc_ctx.uc_failed) {
IPAERR("uC reported an error, errorType = %s\n",
ipa_hw_error_str(ipa3_ctx->
uc_ctx.uc_error_type));
}
IPA3_UC_UNLOCK(flags);
BUG();
return -EFAULT;
}
}
if (ipa3_ctx->uc_ctx.uc_status != expected_status) {
if (ipa3_ctx->uc_ctx.uc_status ==
IPA_HW_PROD_DISABLE_CMD_GSI_STOP_FAILURE) {
retries++;
if (retries == IPA_GSI_CHANNEL_STOP_MAX_RETRY) {
IPAERR("Failed after %d tries\n", retries);
IPA3_UC_UNLOCK(flags);
BUG();
return -EFAULT;
}
IPA3_UC_UNLOCK(flags);
ipa3_inject_dma_task_for_gsi();
/* sleep for short period to flush IPA */
usleep_range(IPA_GSI_CHANNEL_STOP_SLEEP_MIN_USEC,
IPA_GSI_CHANNEL_STOP_SLEEP_MAX_USEC);
goto send_cmd_lock;
}
if (ipa3_ctx->uc_ctx.uc_status ==
IPA_HW_GSI_CH_NOT_EMPTY_FAILURE) {
retries++;
if (retries >= IPA_GSI_CHANNEL_EMPTY_MAX_RETRY) {
IPAERR("Failed after %d tries\n", retries);
IPA3_UC_UNLOCK(flags);
return -EFAULT;
}
if (ipa3_ctx->apply_rg10_wa)
udelay(
IPA_GSI_CHANNEL_EMPTY_SLEEP_MAX_USEC / 2 +
IPA_GSI_CHANNEL_EMPTY_SLEEP_MIN_USEC / 2);
else
usleep_range(
IPA_GSI_CHANNEL_EMPTY_SLEEP_MIN_USEC,
IPA_GSI_CHANNEL_EMPTY_SLEEP_MAX_USEC);
goto send_cmd;
}
IPAERR("Received status %u, expected status %u\n",
ipa3_ctx->uc_ctx.uc_status, expected_status);
IPA3_UC_UNLOCK(flags);
return -EFAULT;
}
IPA3_UC_UNLOCK(flags);
IPADBG("uC cmd %u send succeeded\n", opcode);
return 0;
}
/**
* ipa3_uc_interface_init() - Initialize the interface with the uC
*
* Return value: 0 on success, negative value otherwise
*/
int ipa3_uc_interface_init(void)
{
int result;
unsigned long phys_addr;
if (ipa3_ctx->uc_ctx.uc_inited) {
IPADBG("uC interface already initialized\n");
return 0;
}
mutex_init(&ipa3_ctx->uc_ctx.uc_lock);
spin_lock_init(&ipa3_ctx->uc_ctx.uc_spinlock);
phys_addr = ipa3_ctx->ipa_wrapper_base +
ipa3_ctx->ctrl->ipa_reg_base_ofst +
ipahal_get_reg_n_ofst(IPA_SRAM_DIRECT_ACCESS_n, 0);
ipa3_ctx->uc_ctx.uc_sram_mmio = ioremap(phys_addr,
IPA_RAM_UC_SMEM_SIZE);
if (!ipa3_ctx->uc_ctx.uc_sram_mmio) {
IPAERR("Fail to ioremap IPA uC SRAM\n");
result = -ENOMEM;
goto remap_fail;
}
if (!ipa3_ctx->apply_rg10_wa) {
result = ipa3_add_interrupt_handler(IPA_UC_IRQ_0,
ipa3_uc_event_handler, true,
ipa3_ctx);
if (result) {
IPAERR("Fail to register for UC_IRQ0 rsp interrupt\n");
result = -EFAULT;
goto irq_fail0;
}
result = ipa3_add_interrupt_handler(IPA_UC_IRQ_1,
ipa3_uc_response_hdlr, true,
ipa3_ctx);
if (result) {
IPAERR("fail to register for UC_IRQ1 rsp interrupt\n");
result = -EFAULT;
goto irq_fail1;
}
}
ipa3_ctx->uc_ctx.uc_inited = true;
IPADBG("IPA uC interface is initialized\n");
return 0;
irq_fail1:
ipa3_remove_interrupt_handler(IPA_UC_IRQ_0);
irq_fail0:
iounmap(ipa3_ctx->uc_ctx.uc_sram_mmio);
remap_fail:
return result;
}
/**
* ipa3_uc_load_notify() - Notification about uC loading
*
 * This function should be called when the IPA uC interface layer cannot
 * determine uC loading by itself and instead waits for an external
 * notification. An example is the resource group 10 limitation, where the
 * IPA driver does not get uC interrupts.
 * The function performs the actions that were skipped at init because the
 * uC was not yet loaded.
*/
void ipa3_uc_load_notify(void)
{
int i;
int result;
if (!ipa3_ctx->apply_rg10_wa)
return;
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
ipa3_ctx->uc_ctx.uc_loaded = true;
IPADBG("IPA uC loaded\n");
ipa3_proxy_clk_unvote();
ipa3_init_interrupts();
result = ipa3_add_interrupt_handler(IPA_UC_IRQ_0,
ipa3_uc_event_handler, true,
ipa3_ctx);
if (result)
IPAERR("Fail to register for UC_IRQ0 rsp interrupt.\n");
for (i = 0; i < IPA_HW_NUM_FEATURES; i++) {
if (ipa3_uc_hdlrs[i].ipa_uc_loaded_hdlr)
ipa3_uc_hdlrs[i].ipa_uc_loaded_hdlr();
}
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
}
EXPORT_SYMBOL(ipa3_uc_load_notify);
/**
* ipa3_uc_send_cmd() - Send a command to the uC
*
 * Note1: This function sends a command with a 32-bit parameter and does
 * not use the higher 32 bits of the command parameter (they are set to
 * zero).
 *
 * Note2: In case the operation times out (no response from the uC) or
 * the maximal number of polling retries has been reached, the logic
 * considers it an invalid state of the uC/IPA and issues a kernel panic.
*
* Returns: 0 on success.
* -EINVAL in case of invalid input.
* -EBADF in case uC interface is not initialized /
* or the uC has failed previously.
* -EFAULT in case the received status doesn't match
* the expected.
*/
int ipa3_uc_send_cmd(u32 cmd, u32 opcode, u32 expected_status,
bool polling_mode, unsigned long timeout_jiffies)
{
return ipa3_uc_send_cmd_64b_param(cmd, 0, opcode,
expected_status, polling_mode, timeout_jiffies);
}
/**
* ipa3_uc_register_handlers() - Registers event, response and log event
 * handlers for a specific feature. Please note
* that currently only one handler can be
* registered per feature.
*
* Return value: None
*/
void ipa3_uc_register_handlers(enum ipa3_hw_features feature,
struct ipa3_uc_hdlrs *hdlrs)
{
unsigned long flags;
if (0 > feature || IPA_HW_FEATURE_MAX <= feature) {
IPAERR("Feature %u is invalid, not registering hdlrs\n",
feature);
return;
}
IPA3_UC_LOCK(flags);
ipa3_uc_hdlrs[feature] = *hdlrs;
IPA3_UC_UNLOCK(flags);
IPADBG("uC handlers registered for feature %u\n", feature);
}
/**
* ipa3_uc_reset_pipe() - reset a BAM pipe using the uC interface
* @ipa_client: [in] ipa client handle representing the pipe
*
* The function uses the uC interface in order to issue a BAM
* PIPE reset request. The uC makes sure there's no traffic in
* the TX command queue before issuing the reset.
*
* Returns: 0 on success, negative on failure
*/
int ipa3_uc_reset_pipe(enum ipa_client_type ipa_client)
{
union IpaHwResetPipeCmdData_t cmd;
int ep_idx;
int ret;
ep_idx = ipa3_get_ep_mapping(ipa_client);
if (ep_idx == -1) {
IPAERR("Invalid IPA client\n");
return 0;
}
/*
* If the uC interface has not been initialized yet,
* continue with the sequence without resetting the
* pipe.
*/
if (ipa3_uc_state_check()) {
IPADBG("uC interface will not be used to reset %s pipe %d\n",
IPA_CLIENT_IS_PROD(ipa_client) ? "CONS" : "PROD",
ep_idx);
return 0;
}
/*
* IPA consumer = 0, IPA producer = 1.
* IPA driver concept of PROD/CONS is the opposite of the
* IPA HW concept. Therefore, IPA AP CLIENT PRODUCER = IPA CONSUMER,
* and vice-versa.
*/
cmd.params.direction = (u8)(IPA_CLIENT_IS_PROD(ipa_client) ? 0 : 1);
cmd.params.pipeNum = (u8)ep_idx;
IPADBG("uC pipe reset on IPA %s pipe %d\n",
IPA_CLIENT_IS_PROD(ipa_client) ? "CONS" : "PROD", ep_idx);
ret = ipa3_uc_send_cmd(cmd.raw32b, IPA_CPU_2_HW_CMD_RESET_PIPE, 0,
false, 10*HZ);
return ret;
}
int ipa3_uc_is_gsi_channel_empty(enum ipa_client_type ipa_client)
{
struct ipa_gsi_ep_config *gsi_ep_info;
union IpaHwChkChEmptyCmdData_t cmd;
int ret;
gsi_ep_info = ipa3_get_gsi_ep_info(ipa3_get_ep_mapping(ipa_client));
if (!gsi_ep_info) {
IPAERR("Invalid IPA ep index\n");
return 0;
}
if (ipa3_uc_state_check()) {
IPADBG("uC cannot be used to validate ch emptiness clnt=%d\n"
, ipa_client);
return 0;
}
cmd.params.ee_n = gsi_ep_info->ee;
cmd.params.vir_ch_id = gsi_ep_info->ipa_gsi_chan_num;
IPADBG("uC emptiness check for IPA GSI Channel %d\n",
gsi_ep_info->ipa_gsi_chan_num);
ret = ipa3_uc_send_cmd(cmd.raw32b, IPA_CPU_2_HW_CMD_GSI_CH_EMPTY, 0,
false, 10*HZ);
return ret;
}
/**
* ipa3_uc_notify_clk_state() - notify to uC of clock enable / disable
* @enabled: true if clock are enabled
*
* The function uses the uC interface in order to notify uC before IPA clocks
 * are disabled, to make sure the uC is not in the middle of an operation.
 * Also, after clocks are enabled, the uC needs to be notified to start
 * processing.
*
* Returns: 0 on success, negative on failure
*/
int ipa3_uc_notify_clk_state(bool enabled)
{
u32 opcode;
/*
* If the uC interface has not been initialized yet,
* don't notify the uC on the enable/disable
*/
if (ipa3_uc_state_check()) {
IPADBG("uC interface will not notify the UC on clock state\n");
return 0;
}
IPADBG("uC clock %s notification\n", (enabled) ? "UNGATE" : "GATE");
opcode = (enabled) ? IPA_CPU_2_HW_CMD_CLK_UNGATE :
IPA_CPU_2_HW_CMD_CLK_GATE;
return ipa3_uc_send_cmd(0, opcode, 0, true, 0);
}
/**
* ipa3_uc_update_hw_flags() - send uC the HW flags to be used
* @flags: This field is expected to be used as bitmask for enum ipa3_hw_flags
*
* Returns: 0 on success, negative on failure
*/
int ipa3_uc_update_hw_flags(u32 flags)
{
union IpaHwUpdateFlagsCmdData_t cmd;
memset(&cmd, 0, sizeof(cmd));
cmd.params.newFlags = flags;
return ipa3_uc_send_cmd(cmd.raw32b, IPA_CPU_2_HW_CMD_UPDATE_FLAGS, 0,
false, HZ);
}
/**
* ipa3_uc_rg10_write_reg() - write to register possibly via uC
*
* if the RG10 limitation workaround is enabled, then writing
* to a register will be proxied by the uC due to H/W limitation.
* This func should be called for RG10 registers only
*
* @Parameters: Like ipahal_write_reg_n() parameters
*
*/
void ipa3_uc_rg10_write_reg(enum ipahal_reg_name reg, u32 n, u32 val)
{
int ret;
u32 paddr;
if (!ipa3_ctx->apply_rg10_wa)
return ipahal_write_reg_n(reg, n, val);
/* calculate register physical address */
paddr = ipa3_ctx->ipa_wrapper_base + ipa3_ctx->ctrl->ipa_reg_base_ofst;
paddr += ipahal_get_reg_n_ofst(reg, n);
IPADBG("Sending uC cmd to reg write: addr=0x%x val=0x%x\n",
paddr, val);
ret = ipa3_uc_send_cmd_64b_param(paddr, val,
IPA_CPU_2_HW_CMD_REG_WRITE, 0, true, 0);
if (ret) {
IPAERR("failed to send cmd to uC for reg write\n");
BUG();
}
}
/**
* ipa3_uc_memcpy() - Perform a memcpy action using IPA uC
* @dest: physical address to store the copied data.
* @src: physical address of the source data to copy.
* @len: number of bytes to copy.
*
* Returns: 0 on success, negative on failure
*/
int ipa3_uc_memcpy(phys_addr_t dest, phys_addr_t src, int len)
{
int res;
struct ipa_mem_buffer mem;
struct IpaHwMemCopyData_t *cmd;
IPADBG("dest 0x%pa src 0x%pa len %d\n", &dest, &src, len);
mem.size = sizeof(*cmd);
mem.base = dma_alloc_coherent(ipa3_ctx->pdev, mem.size, &mem.phys_base,
GFP_KERNEL);
if (!mem.base) {
IPAERR("fail to alloc DMA buff of size %d\n", mem.size);
return -ENOMEM;
}
cmd = (struct IpaHwMemCopyData_t *)mem.base;
memset(cmd, 0, sizeof(*cmd));
cmd->destination_addr = dest;
cmd->dest_buffer_size = len;
cmd->source_addr = src;
cmd->source_buffer_size = len;
res = ipa3_uc_send_cmd((u32)mem.phys_base, IPA_CPU_2_HW_CMD_MEMCPY, 0,
true, 10 * HZ);
if (res) {
IPAERR("ipa3_uc_send_cmd failed %d\n", res);
goto free_coherent;
}
res = 0;
free_coherent:
dma_free_coherent(ipa3_ctx->pdev, mem.size, mem.base, mem.phys_base);
return res;
}

/* Copyright (c) 2015, 2016 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/ipa.h>
#include "ipa_i.h"
/* MHI uC interface definitions */
#define IPA_HW_INTERFACE_MHI_VERSION 0x0004
#define IPA_HW_MAX_NUMBER_OF_CHANNELS 2
#define IPA_HW_MAX_NUMBER_OF_EVENTRINGS 2
#define IPA_HW_MAX_CHANNEL_HANDLE (IPA_HW_MAX_NUMBER_OF_CHANNELS-1)
/**
* Values that represent the MHI commands from CPU to IPA HW.
* @IPA_CPU_2_HW_CMD_MHI_INIT: Initialize HW to be ready for MHI processing.
* Once operation was completed HW shall respond with
* IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED.
* @IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL: Initialize specific channel to be ready
 * to serve MHI transfers. Once initialization is completed HW shall
 * respond with IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE carrying
 * state IPA_HW_MHI_CHANNEL_STATE_ENABLE.
* @IPA_CPU_2_HW_CMD_MHI_UPDATE_MSI: Update MHI MSI interrupts data.
* Once operation was completed HW shall respond with
* IPA_HW_2_CPU_RESPONSE_CMD_COMPLETED.
* @IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE: Change specific channel
* processing state following host request. Once operation was completed
* HW shall respond with IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE.
 * @IPA_CPU_2_HW_CMD_MHI_DL_UL_SYNC_INFO: Info related to DL UL synchronization.
* @IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE: Cmd to stop event ring processing.
*/
enum ipa_cpu_2_hw_mhi_commands {
IPA_CPU_2_HW_CMD_MHI_INIT
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 0),
IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 1),
IPA_CPU_2_HW_CMD_MHI_UPDATE_MSI
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 2),
IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 3),
IPA_CPU_2_HW_CMD_MHI_DL_UL_SYNC_INFO
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 4),
IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 5)
};
/**
* Values that represent MHI related HW responses to CPU commands.
* @IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE: Response to
* IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL or
* IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE commands.
*/
enum ipa_hw_2_cpu_mhi_responses {
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 0),
};
/**
* Values that represent MHI related HW event to be sent to CPU.
 * @IPA_HW_2_CPU_EVENT_MHI_CHANNEL_ERROR: Event specifying that the device
 * detected an error in an element from the transfer ring associated with
 * the channel
 * @IPA_HW_2_CPU_EVENT_MHI_CHANNEL_WAKE_UP_REQUEST: Event specifying that a
 * BAM interrupt was asserted while the MHI engine is suspended
enum ipa_hw_2_cpu_mhi_events {
IPA_HW_2_CPU_EVENT_MHI_CHANNEL_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 0),
IPA_HW_2_CPU_EVENT_MHI_CHANNEL_WAKE_UP_REQUEST
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 1),
};
/**
* Channel error types.
* @IPA_HW_CHANNEL_ERROR_NONE: No error persists.
* @IPA_HW_CHANNEL_INVALID_RE_ERROR: Invalid Ring Element was detected
*/
enum ipa_hw_channel_errors {
IPA_HW_CHANNEL_ERROR_NONE,
IPA_HW_CHANNEL_INVALID_RE_ERROR
};
/**
* MHI error types.
* @IPA_HW_INVALID_MMIO_ERROR: Invalid data read from MMIO space
* @IPA_HW_INVALID_CHANNEL_ERROR: Invalid data read from channel context array
* @IPA_HW_INVALID_EVENT_ERROR: Invalid data read from event ring context array
* @IPA_HW_NO_ED_IN_RING_ERROR: No event descriptors are available to report on
* secondary event ring
* @IPA_HW_LINK_ERROR: Link error
*/
enum ipa_hw_mhi_errors {
IPA_HW_INVALID_MMIO_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 0),
IPA_HW_INVALID_CHANNEL_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 1),
IPA_HW_INVALID_EVENT_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 2),
IPA_HW_NO_ED_IN_RING_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 4),
IPA_HW_LINK_ERROR
= FEATURE_ENUM_VAL(IPA_HW_FEATURE_MHI, 5),
};
/**
* Structure referring to the common and MHI section of 128B shared memory
* located in offset zero of SW Partition in IPA SRAM.
* The shared memory is used for communication between IPA HW and CPU.
* @common: common section in IPA SRAM
* @interfaceVersionMhi: The MHI interface version as reported by HW
* @mhiState: Overall MHI state
* @reserved_2B: reserved
* @mhiCnl0State: State of MHI channel 0.
* The state carries information regarding the error type.
* See IPA_HW_MHI_CHANNEL_STATES.
 * @mhiCnl1State: State of MHI channel 1.
 * @mhiCnl2State: State of MHI channel 2.
 * @mhiCnl3State: State of MHI channel 3.
 * @mhiCnl4State: State of MHI channel 4.
 * @mhiCnl5State: State of MHI channel 5.
 * @mhiCnl6State: State of MHI channel 6.
 * @mhiCnl7State: State of MHI channel 7.
* @reserved_37_34: reserved
* @reserved_3B_38: reserved
* @reserved_3F_3C: reserved
*/
struct IpaHwSharedMemMhiMapping_t {
struct IpaHwSharedMemCommonMapping_t common;
u16 interfaceVersionMhi;
u8 mhiState;
u8 reserved_2B;
u8 mhiCnl0State;
u8 mhiCnl1State;
u8 mhiCnl2State;
u8 mhiCnl3State;
u8 mhiCnl4State;
u8 mhiCnl5State;
u8 mhiCnl6State;
u8 mhiCnl7State;
u32 reserved_37_34;
u32 reserved_3B_38;
u32 reserved_3F_3C;
};
/**
* Structure holding the parameters for IPA_CPU_2_HW_CMD_MHI_INIT command.
 * Parameters are sent as a pointer and thus should reside in an address
 * accessible to HW.
* @msiAddress: The MSI base (in device space) used for asserting the interrupt
* (MSI) associated with the event ring
 * @mmioBaseAddress: The address (in device space) of MMIO structure in
 * host space
 * @deviceMhiCtrlBaseAddress: Base address of the memory region in the device
 * address space where the MHI control data structures are allocated by
 * the host, including channel context array, event context array,
 * and rings. This value is used for host/device address translation.
 * @deviceMhiDataBaseAddress: Base address of the memory region in the device
 * address space where the MHI data buffers are allocated by the host.
 * This value is used for host/device address translation.
 * @firstChannelIndex: First channel ID. Doorbell 0 is mapped to this channel
 * @firstEventRingIndex: First event ring ID. Doorbell 16 is mapped to this
 * event ring.
*/
struct IpaHwMhiInitCmdData_t {
u32 msiAddress;
u32 mmioBaseAddress;
u32 deviceMhiCtrlBaseAddress;
u32 deviceMhiDataBaseAddress;
u32 firstChannelIndex;
u32 firstEventRingIndex;
};
/**
* Structure holding the parameters for IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL
* command. Parameters are sent as 32b immediate parameters.
 * @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @contexArrayIndex: Unique index for channels, between 0 and 255. The index is
* used as an index in channel context array structures.
* @bamPipeId: The BAM pipe number for pipe dedicated for this channel
* @channelDirection: The direction of the channel as defined in the channel
* type field (CHTYPE) in the channel context data structure.
* @reserved: reserved.
*/
union IpaHwMhiInitChannelCmdData_t {
struct IpaHwMhiInitChannelCmdParams_t {
u32 channelHandle:8;
u32 contexArrayIndex:8;
u32 bamPipeId:6;
u32 channelDirection:2;
u32 reserved:8;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for IPA_CPU_2_HW_CMD_MHI_UPDATE_MSI command.
* @msiAddress_low: The MSI lower base addr (in device space) used for asserting
* the interrupt (MSI) associated with the event ring.
* @msiAddress_hi: The MSI higher base addr (in device space) used for asserting
* the interrupt (MSI) associated with the event ring.
* @msiMask: Mask indicating number of messages assigned by the host to device
* @msiData: Data Pattern to use when generating the MSI
*/
struct IpaHwMhiMsiCmdData_t {
u32 msiAddress_low;
u32 msiAddress_hi;
u32 msiMask;
u32 msiData;
};
/**
* Structure holding the parameters for
* IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE command.
* Parameters are sent as 32b immediate parameters.
* @requestedState: The requested channel state as was indicated from Host.
* Use IPA_HW_MHI_CHANNEL_STATES to specify the requested state
* @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @LPTransitionRejected: Indication that low power state transition was
* rejected
* @reserved: reserved
*/
union IpaHwMhiChangeChannelStateCmdData_t {
struct IpaHwMhiChangeChannelStateCmdParams_t {
u32 requestedState:8;
u32 channelHandle:8;
u32 LPTransitionRejected:8;
u32 reserved:8;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for
* IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE command.
* Parameters are sent as 32b immediate parameters.
* @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @reserved: reserved
*/
union IpaHwMhiStopEventUpdateData_t {
struct IpaHwMhiStopEventUpdateDataParams_t {
u32 channelHandle:8;
u32 reserved:24;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for
* IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE response.
* Parameters are sent as 32b immediate parameters.
* @state: The new channel state. In case state is not as requested this is
* error indication for the last command
* @channelHandle: The channel identifier
* @additonalParams: For stop: the number of pending bam descriptors currently
* queued
*/
union IpaHwMhiChangeChannelStateResponseData_t {
struct IpaHwMhiChangeChannelStateResponseParams_t {
u32 state:8;
u32 channelHandle:8;
u32 additonalParams:16;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for
* IPA_HW_2_CPU_EVENT_MHI_CHANNEL_ERROR event.
* Parameters are sent as 32b immediate parameters.
* @errorType: Type of error - IPA_HW_CHANNEL_ERRORS
* @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @reserved: reserved
*/
union IpaHwMhiChannelErrorEventData_t {
struct IpaHwMhiChannelErrorEventParams_t {
u32 errorType:8;
u32 channelHandle:8;
u32 reserved:16;
} params;
u32 raw32b;
};
/**
* Structure holding the parameters for
* IPA_HW_2_CPU_EVENT_MHI_CHANNEL_WAKE_UP_REQUEST event.
* Parameters are sent as 32b immediate parameters.
* @channelHandle: The channel identifier as allocated by driver.
* value is within the range 0 to IPA_HW_MAX_CHANNEL_HANDLE
* @reserved: reserved
*/
union IpaHwMhiChannelWakeupEventData_t {
struct IpaHwMhiChannelWakeupEventParams_t {
u32 channelHandle:8;
u32 reserved:24;
} params;
u32 raw32b;
};
/**
* Structure holding the MHI Common statistics
 * @numULDLSync: Number of times UL activity was triggered due to DL activity
 * @numULTimerExpired: Number of times the UL Accm Timer expired
 * @numChEvCtxWpRead: Number of times the channel event context WP was read
 * @reserved: reserved
 */
struct IpaHwStatsMhiCmnInfoData_t {
u32 numULDLSync;
u32 numULTimerExpired;
u32 numChEvCtxWpRead;
u32 reserved;
};
/**
* Structure holding the MHI Channel statistics
* @doorbellInt: The number of doorbell int
* @reProccesed: The number of ring elements processed
* @bamFifoFull: Number of times Bam Fifo got full
* @bamFifoEmpty: Number of times Bam Fifo got empty
* @bamFifoUsageHigh: Number of times Bam fifo usage went above 75%
* @bamFifoUsageLow: Number of times Bam fifo usage went below 25%
* @bamInt: Number of BAM Interrupts
* @ringFull: Number of times Transfer Ring got full
 * @ringEmpty: Number of times Transfer Ring got empty
* @ringUsageHigh: Number of times Transfer Ring usage went above 75%
* @ringUsageLow: Number of times Transfer Ring usage went below 25%
* @delayedMsi: Number of times device triggered MSI to host after
* Interrupt Moderation Timer expiry
* @immediateMsi: Number of times device triggered MSI to host immediately
* @thresholdMsi: Number of times device triggered MSI due to max pending
* events threshold reached
* @numSuspend: Number of times channel was suspended
 * @numResume: Number of times channel was resumed
* @num_OOB: Number of times we indicated that we are OOB
* @num_OOB_timer_expiry: Number of times we indicated that we are OOB
* after timer expiry
* @num_OOB_moderation_timer_start: Number of times we started timer after
* sending OOB and hitting OOB again before we processed threshold
* number of packets
* @num_db_mode_evt: Number of times we indicated that we are in Doorbell mode
*/
struct IpaHwStatsMhiCnlInfoData_t {
u32 doorbellInt;
u32 reProccesed;
u32 bamFifoFull;
u32 bamFifoEmpty;
u32 bamFifoUsageHigh;
u32 bamFifoUsageLow;
u32 bamInt;
u32 ringFull;
u32 ringEmpty;
u32 ringUsageHigh;
u32 ringUsageLow;
u32 delayedMsi;
u32 immediateMsi;
u32 thresholdMsi;
u32 numSuspend;
u32 numResume;
u32 num_OOB;
u32 num_OOB_timer_expiry;
u32 num_OOB_moderation_timer_start;
u32 num_db_mode_evt;
};
/**
* Structure holding the MHI statistics
* @mhiCmnStats: Stats pertaining to MHI
* @mhiCnlStats: Stats pertaining to each channel
*/
struct IpaHwStatsMhiInfoData_t {
struct IpaHwStatsMhiCmnInfoData_t mhiCmnStats;
struct IpaHwStatsMhiCnlInfoData_t mhiCnlStats[
IPA_HW_MAX_NUMBER_OF_CHANNELS];
};
/**
* Structure holding the MHI Common Config info
* @isDlUlSyncEnabled: Flag to indicate if DL-UL synchronization is enabled
* @UlAccmVal: Out Channel(UL) accumulation time in ms when DL UL Sync is
* enabled
* @ulMsiEventThreshold: Threshold at which HW fires MSI to host for UL events
* @dlMsiEventThreshold: Threshold at which HW fires MSI to host for DL events
*/
struct IpaHwConfigMhiCmnInfoData_t {
u8 isDlUlSyncEnabled;
u8 UlAccmVal;
u8 ulMsiEventThreshold;
u8 dlMsiEventThreshold;
};
/**
* Structure holding the parameters for MSI info data
* @msiAddress_low: The MSI lower base addr (in device space) used for asserting
* the interrupt (MSI) associated with the event ring.
* @msiAddress_hi: The MSI higher base addr (in device space) used for asserting
* the interrupt (MSI) associated with the event ring.
* @msiMask: Mask indicating number of messages assigned by the host to device
* @msiData: Data Pattern to use when generating the MSI
*/
struct IpaHwConfigMhiMsiInfoData_t {
u32 msiAddress_low;
u32 msiAddress_hi;
u32 msiMask;
u32 msiData;
};
/**
* Structure holding the MHI Channel Config info
* @transferRingSize: The Transfer Ring size in terms of Ring Elements
* @transferRingIndex: The Transfer Ring channel number as defined by host
* @eventRingIndex: The Event Ring Index associated with this Transfer Ring
* @bamPipeIndex: The BAM Pipe associated with this channel
* @isOutChannel: Indication for the direction of channel
* @reserved_0: Reserved byte for maintaining 4-byte alignment
* @reserved_1: Reserved byte for maintaining 4-byte alignment
*/
struct IpaHwConfigMhiCnlInfoData_t {
u16 transferRingSize;
u8 transferRingIndex;
u8 eventRingIndex;
u8 bamPipeIndex;
u8 isOutChannel;
u8 reserved_0;
u8 reserved_1;
};
/**
* Structure holding the MHI Event Config info
* @msiVec: msi vector to invoke MSI interrupt
* @intmodtValue: Interrupt moderation timer (in milliseconds)
* @eventRingSize: The Event Ring size in terms of Ring Elements
* @eventRingIndex: The Event Ring number as defined by host
* @reserved_0: Reserved byte for maintaining 4-byte alignment
* @reserved_1: Reserved byte for maintaining 4-byte alignment
* @reserved_2: Reserved byte for maintaining 4-byte alignment
*/
struct IpaHwConfigMhiEventInfoData_t {
u32 msiVec;
u16 intmodtValue;
u16 eventRingSize;
u8 eventRingIndex;
u8 reserved_0;
u8 reserved_1;
u8 reserved_2;
};
/**
* Structure holding the MHI Config info
* @mhiCmnCfg: Common Config pertaining to MHI
* @mhiMsiCfg: Config pertaining to MSI config
* @mhiCnlCfg: Config pertaining to each channel
* @mhiEvtCfg: Config pertaining to each event Ring
*/
struct IpaHwConfigMhiInfoData_t {
struct IpaHwConfigMhiCmnInfoData_t mhiCmnCfg;
struct IpaHwConfigMhiMsiInfoData_t mhiMsiCfg;
struct IpaHwConfigMhiCnlInfoData_t mhiCnlCfg[
IPA_HW_MAX_NUMBER_OF_CHANNELS];
struct IpaHwConfigMhiEventInfoData_t mhiEvtCfg[
IPA_HW_MAX_NUMBER_OF_EVENTRINGS];
};
struct ipa3_uc_mhi_ctx {
u8 expected_responseOp;
u32 expected_responseParams;
void (*ready_cb)(void);
void (*wakeup_request_cb)(void);
u32 mhi_uc_stats_ofst;
struct IpaHwStatsMhiInfoData_t *mhi_uc_stats_mmio;
};
#define PRINT_COMMON_STATS(x) \
(nBytes += scnprintf(&dbg_buff[nBytes], size - nBytes, \
#x "=0x%x\n", ipa3_uc_mhi_ctx->mhi_uc_stats_mmio->mhiCmnStats.x))
#define PRINT_CHANNEL_STATS(ch, x) \
(nBytes += scnprintf(&dbg_buff[nBytes], size - nBytes, \
#x "=0x%x\n", ipa3_uc_mhi_ctx->mhi_uc_stats_mmio->mhiCnlStats[ch].x))
struct ipa3_uc_mhi_ctx *ipa3_uc_mhi_ctx;
static int ipa3_uc_mhi_response_hdlr(struct IpaHwSharedMemCommonMapping_t
*uc_sram_mmio, u32 *uc_status)
{
IPADBG("responseOp=%d\n", uc_sram_mmio->responseOp);
if (uc_sram_mmio->responseOp == ipa3_uc_mhi_ctx->expected_responseOp &&
uc_sram_mmio->responseParams ==
ipa3_uc_mhi_ctx->expected_responseParams) {
*uc_status = 0;
return 0;
}
return -EINVAL;
}
static void ipa3_uc_mhi_event_hdlr(struct IpaHwSharedMemCommonMapping_t
*uc_sram_mmio)
{
if (ipa3_ctx->uc_ctx.uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_MHI_CHANNEL_ERROR) {
union IpaHwMhiChannelErrorEventData_t evt;
IPAERR("Channel error\n");
evt.raw32b = uc_sram_mmio->eventParams;
IPAERR("errorType=%d channelHandle=%d reserved=%d\n",
evt.params.errorType, evt.params.channelHandle,
evt.params.reserved);
} else if (ipa3_ctx->uc_ctx.uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_MHI_CHANNEL_WAKE_UP_REQUEST) {
union IpaHwMhiChannelWakeupEventData_t evt;
IPADBG("WakeUp channel request\n");
evt.raw32b = uc_sram_mmio->eventParams;
IPADBG("channelHandle=%d reserved=%d\n",
evt.params.channelHandle, evt.params.reserved);
ipa3_uc_mhi_ctx->wakeup_request_cb();
}
}
static void ipa3_uc_mhi_event_log_info_hdlr(
struct IpaHwEventLogInfoData_t *uc_event_top_mmio)
{
if ((uc_event_top_mmio->featureMask & (1 << IPA_HW_FEATURE_MHI)) == 0) {
IPAERR("MHI feature missing 0x%x\n",
uc_event_top_mmio->featureMask);
return;
}
if (uc_event_top_mmio->statsInfo.featureInfo[IPA_HW_FEATURE_MHI].
params.size != sizeof(struct IpaHwStatsMhiInfoData_t)) {
IPAERR("mhi stats sz invalid exp=%zu is=%u\n",
sizeof(struct IpaHwStatsMhiInfoData_t),
uc_event_top_mmio->statsInfo.
featureInfo[IPA_HW_FEATURE_MHI].params.size);
return;
}
ipa3_uc_mhi_ctx->mhi_uc_stats_ofst = uc_event_top_mmio->
statsInfo.baseAddrOffset + uc_event_top_mmio->statsInfo.
featureInfo[IPA_HW_FEATURE_MHI].params.offset;
IPAERR("MHI stats ofst=0x%x\n", ipa3_uc_mhi_ctx->mhi_uc_stats_ofst);
if (ipa3_uc_mhi_ctx->mhi_uc_stats_ofst +
sizeof(struct IpaHwStatsMhiInfoData_t) >=
ipa3_ctx->ctrl->ipa_reg_base_ofst +
ipahal_get_reg_n_ofst(IPA_SRAM_DIRECT_ACCESS_n, 0) +
ipa3_ctx->smem_sz) {
IPAERR("uc_mhi_stats 0x%x outside SRAM\n",
ipa3_uc_mhi_ctx->mhi_uc_stats_ofst);
return;
}
ipa3_uc_mhi_ctx->mhi_uc_stats_mmio =
ioremap(ipa3_ctx->ipa_wrapper_base +
ipa3_uc_mhi_ctx->mhi_uc_stats_ofst,
sizeof(struct IpaHwStatsMhiInfoData_t));
if (!ipa3_uc_mhi_ctx->mhi_uc_stats_mmio) {
IPAERR("fail to ioremap uc mhi stats\n");
return;
}
}
int ipa3_uc_mhi_init(void (*ready_cb)(void), void (*wakeup_request_cb)(void))
{
struct ipa3_uc_hdlrs hdlrs;
if (ipa3_uc_mhi_ctx) {
IPAERR("Already initialized\n");
return -EFAULT;
}
ipa3_uc_mhi_ctx = kzalloc(sizeof(*ipa3_uc_mhi_ctx), GFP_KERNEL);
if (!ipa3_uc_mhi_ctx) {
IPAERR("no mem\n");
return -ENOMEM;
}
ipa3_uc_mhi_ctx->ready_cb = ready_cb;
ipa3_uc_mhi_ctx->wakeup_request_cb = wakeup_request_cb;
memset(&hdlrs, 0, sizeof(hdlrs));
hdlrs.ipa_uc_loaded_hdlr = ipa3_uc_mhi_ctx->ready_cb;
hdlrs.ipa3_uc_response_hdlr = ipa3_uc_mhi_response_hdlr;
hdlrs.ipa_uc_event_hdlr = ipa3_uc_mhi_event_hdlr;
hdlrs.ipa_uc_event_log_info_hdlr = ipa3_uc_mhi_event_log_info_hdlr;
ipa3_uc_register_handlers(IPA_HW_FEATURE_MHI, &hdlrs);
IPADBG("Done\n");
return 0;
}
void ipa3_uc_mhi_cleanup(void)
{
struct ipa3_uc_hdlrs null_hdlrs = { 0 };
IPADBG("Enter\n");
if (!ipa3_uc_mhi_ctx) {
IPAERR("ipa3_uc_mhi_ctx is not initialized\n");
return;
}
ipa3_uc_register_handlers(IPA_HW_FEATURE_MHI, &null_hdlrs);
kfree(ipa3_uc_mhi_ctx);
ipa3_uc_mhi_ctx = NULL;
IPADBG("Done\n");
}
int ipa3_uc_mhi_init_engine(struct ipa_mhi_msi_info *msi, u32 mmio_addr,
u32 host_ctrl_addr, u32 host_data_addr, u32 first_ch_idx,
u32 first_evt_idx)
{
int res;
struct ipa_mem_buffer mem;
struct IpaHwMhiInitCmdData_t *init_cmd_data;
struct IpaHwMhiMsiCmdData_t *msi_cmd;
if (!ipa3_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
res = ipa3_uc_update_hw_flags(0);
if (res) {
IPAERR("ipa3_uc_update_hw_flags failed %d\n", res);
goto disable_clks;
}
mem.size = sizeof(*init_cmd_data);
mem.base = dma_alloc_coherent(ipa3_ctx->pdev, mem.size, &mem.phys_base,
GFP_KERNEL);
if (!mem.base) {
IPAERR("fail to alloc DMA buff of size %d\n", mem.size);
res = -ENOMEM;
goto disable_clks;
}
memset(mem.base, 0, mem.size);
init_cmd_data = (struct IpaHwMhiInitCmdData_t *)mem.base;
init_cmd_data->msiAddress = msi->addr_low;
init_cmd_data->mmioBaseAddress = mmio_addr;
init_cmd_data->deviceMhiCtrlBaseAddress = host_ctrl_addr;
init_cmd_data->deviceMhiDataBaseAddress = host_data_addr;
init_cmd_data->firstChannelIndex = first_ch_idx;
init_cmd_data->firstEventRingIndex = first_evt_idx;
res = ipa3_uc_send_cmd((u32)mem.phys_base, IPA_CPU_2_HW_CMD_MHI_INIT, 0,
false, HZ);
if (res) {
IPAERR("ipa3_uc_send_cmd failed %d\n", res);
dma_free_coherent(ipa3_ctx->pdev, mem.size, mem.base,
mem.phys_base);
goto disable_clks;
}
dma_free_coherent(ipa3_ctx->pdev, mem.size, mem.base, mem.phys_base);
mem.size = sizeof(*msi_cmd);
mem.base = dma_alloc_coherent(ipa3_ctx->pdev, mem.size, &mem.phys_base,
GFP_KERNEL);
if (!mem.base) {
IPAERR("fail to alloc DMA buff of size %d\n", mem.size);
res = -ENOMEM;
goto disable_clks;
}
msi_cmd = (struct IpaHwMhiMsiCmdData_t *)mem.base;
msi_cmd->msiAddress_hi = msi->addr_hi;
msi_cmd->msiAddress_low = msi->addr_low;
msi_cmd->msiData = msi->data;
msi_cmd->msiMask = msi->mask;
res = ipa3_uc_send_cmd((u32)mem.phys_base,
IPA_CPU_2_HW_CMD_MHI_UPDATE_MSI, 0, false, HZ);
if (res) {
IPAERR("ipa3_uc_send_cmd failed %d\n", res);
dma_free_coherent(ipa3_ctx->pdev, mem.size, mem.base,
mem.phys_base);
goto disable_clks;
}
dma_free_coherent(ipa3_ctx->pdev, mem.size, mem.base, mem.phys_base);
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa3_uc_mhi_init_channel(int ipa_ep_idx, int channelHandle,
int contexArrayIndex, int channelDirection)
{
int res;
union IpaHwMhiInitChannelCmdData_t init_cmd;
union IpaHwMhiChangeChannelStateResponseData_t uc_rsp;
if (!ipa3_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
if (ipa_ep_idx < 0 || ipa_ep_idx >= ipa3_ctx->ipa_num_pipes) {
IPAERR("Invalid ipa_ep_idx.\n");
return -EINVAL;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&uc_rsp, 0, sizeof(uc_rsp));
uc_rsp.params.state = IPA_HW_MHI_CHANNEL_STATE_RUN;
uc_rsp.params.channelHandle = channelHandle;
ipa3_uc_mhi_ctx->expected_responseOp =
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE;
ipa3_uc_mhi_ctx->expected_responseParams = uc_rsp.raw32b;
memset(&init_cmd, 0, sizeof(init_cmd));
init_cmd.params.channelHandle = channelHandle;
init_cmd.params.contexArrayIndex = contexArrayIndex;
init_cmd.params.bamPipeId = ipa_ep_idx;
init_cmd.params.channelDirection = channelDirection;
res = ipa3_uc_send_cmd(init_cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_INIT_CHANNEL, 0, false, HZ);
if (res) {
IPAERR("ipa3_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa3_uc_mhi_reset_channel(int channelHandle)
{
union IpaHwMhiChangeChannelStateCmdData_t cmd;
union IpaHwMhiChangeChannelStateResponseData_t uc_rsp;
int res;
if (!ipa3_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&uc_rsp, 0, sizeof(uc_rsp));
uc_rsp.params.state = IPA_HW_MHI_CHANNEL_STATE_DISABLE;
uc_rsp.params.channelHandle = channelHandle;
ipa3_uc_mhi_ctx->expected_responseOp =
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE;
ipa3_uc_mhi_ctx->expected_responseParams = uc_rsp.raw32b;
memset(&cmd, 0, sizeof(cmd));
cmd.params.requestedState = IPA_HW_MHI_CHANNEL_STATE_DISABLE;
cmd.params.channelHandle = channelHandle;
res = ipa3_uc_send_cmd(cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE, 0, false, HZ);
if (res) {
IPAERR("ipa3_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa3_uc_mhi_suspend_channel(int channelHandle)
{
union IpaHwMhiChangeChannelStateCmdData_t cmd;
union IpaHwMhiChangeChannelStateResponseData_t uc_rsp;
int res;
if (!ipa3_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&uc_rsp, 0, sizeof(uc_rsp));
uc_rsp.params.state = IPA_HW_MHI_CHANNEL_STATE_SUSPEND;
uc_rsp.params.channelHandle = channelHandle;
ipa3_uc_mhi_ctx->expected_responseOp =
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE;
ipa3_uc_mhi_ctx->expected_responseParams = uc_rsp.raw32b;
memset(&cmd, 0, sizeof(cmd));
cmd.params.requestedState = IPA_HW_MHI_CHANNEL_STATE_SUSPEND;
cmd.params.channelHandle = channelHandle;
res = ipa3_uc_send_cmd(cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE, 0, false, HZ);
if (res) {
IPAERR("ipa3_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa3_uc_mhi_resume_channel(int channelHandle, bool LPTransitionRejected)
{
union IpaHwMhiChangeChannelStateCmdData_t cmd;
union IpaHwMhiChangeChannelStateResponseData_t uc_rsp;
int res;
if (!ipa3_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&uc_rsp, 0, sizeof(uc_rsp));
uc_rsp.params.state = IPA_HW_MHI_CHANNEL_STATE_RUN;
uc_rsp.params.channelHandle = channelHandle;
ipa3_uc_mhi_ctx->expected_responseOp =
IPA_HW_2_CPU_RESPONSE_MHI_CHANGE_CHANNEL_STATE;
ipa3_uc_mhi_ctx->expected_responseParams = uc_rsp.raw32b;
memset(&cmd, 0, sizeof(cmd));
cmd.params.requestedState = IPA_HW_MHI_CHANNEL_STATE_RUN;
cmd.params.channelHandle = channelHandle;
cmd.params.LPTransitionRejected = LPTransitionRejected;
res = ipa3_uc_send_cmd(cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_CHANGE_CHANNEL_STATE, 0, false, HZ);
if (res) {
IPAERR("ipa3_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa3_uc_mhi_stop_event_update_channel(int channelHandle)
{
union IpaHwMhiStopEventUpdateData_t cmd;
int res;
if (!ipa3_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
memset(&cmd, 0, sizeof(cmd));
cmd.params.channelHandle = channelHandle;
ipa3_uc_mhi_ctx->expected_responseOp =
IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE;
ipa3_uc_mhi_ctx->expected_responseParams = cmd.raw32b;
res = ipa3_uc_send_cmd(cmd.raw32b,
IPA_CPU_2_HW_CMD_MHI_STOP_EVENT_UPDATE, 0, false, HZ);
if (res) {
IPAERR("ipa3_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa3_uc_mhi_send_dl_ul_sync_info(union IpaHwMhiDlUlSyncCmdData_t *cmd)
{
int res;
if (!ipa3_uc_mhi_ctx) {
IPAERR("Not initialized\n");
return -EFAULT;
}
IPADBG("isDlUlSyncEnabled=0x%x UlAccmVal=0x%x\n",
cmd->params.isDlUlSyncEnabled, cmd->params.UlAccmVal);
IPADBG("ulMsiEventThreshold=0x%x dlMsiEventThreshold=0x%x\n",
cmd->params.ulMsiEventThreshold,
cmd->params.dlMsiEventThreshold);
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
res = ipa3_uc_send_cmd(cmd->raw32b,
IPA_CPU_2_HW_CMD_MHI_DL_UL_SYNC_INFO, 0, false, HZ);
if (res) {
IPAERR("ipa3_uc_send_cmd failed %d\n", res);
goto disable_clks;
}
res = 0;
disable_clks:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return res;
}
int ipa3_uc_mhi_print_stats(char *dbg_buff, int size)
{
int nBytes = 0;
int i;
if (!ipa3_uc_mhi_ctx || !ipa3_uc_mhi_ctx->mhi_uc_stats_mmio) {
IPAERR("MHI uc stats is not valid\n");
return 0;
}
nBytes += scnprintf(&dbg_buff[nBytes], size - nBytes,
"Common Stats:\n");
PRINT_COMMON_STATS(numULDLSync);
PRINT_COMMON_STATS(numULTimerExpired);
PRINT_COMMON_STATS(numChEvCtxWpRead);
for (i = 0; i < IPA_HW_MAX_NUMBER_OF_CHANNELS; i++) {
nBytes += scnprintf(&dbg_buff[nBytes], size - nBytes,
"Channel %d Stats:\n", i);
PRINT_CHANNEL_STATS(i, doorbellInt);
PRINT_CHANNEL_STATS(i, reProccesed);
PRINT_CHANNEL_STATS(i, bamFifoFull);
PRINT_CHANNEL_STATS(i, bamFifoEmpty);
PRINT_CHANNEL_STATS(i, bamFifoUsageHigh);
PRINT_CHANNEL_STATS(i, bamFifoUsageLow);
PRINT_CHANNEL_STATS(i, bamInt);
PRINT_CHANNEL_STATS(i, ringFull);
PRINT_CHANNEL_STATS(i, ringEmpty);
PRINT_CHANNEL_STATS(i, ringUsageHigh);
PRINT_CHANNEL_STATS(i, ringUsageLow);
PRINT_CHANNEL_STATS(i, delayedMsi);
PRINT_CHANNEL_STATS(i, immediateMsi);
PRINT_CHANNEL_STATS(i, thresholdMsi);
PRINT_CHANNEL_STATS(i, numSuspend);
PRINT_CHANNEL_STATS(i, numResume);
PRINT_CHANNEL_STATS(i, num_OOB);
PRINT_CHANNEL_STATS(i, num_OOB_timer_expiry);
PRINT_CHANNEL_STATS(i, num_OOB_moderation_timer_start);
PRINT_CHANNEL_STATS(i, num_db_mode_evt);
}
return nBytes;
}

/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include "ipa_i.h"
#define IPA_UC_NTN_DB_PA_TX 0x79620DC
#define IPA_UC_NTN_DB_PA_RX 0x79620D8
static void ipa3_uc_ntn_event_handler(struct IpaHwSharedMemCommonMapping_t
*uc_sram_mmio)
{
union Ipa3HwNTNErrorEventData_t ntn_evt;
if (uc_sram_mmio->eventOp ==
IPA_HW_2_CPU_EVENT_NTN_ERROR) {
ntn_evt.raw32b = uc_sram_mmio->eventParams;
IPADBG("uC NTN evt errType=%u pipe=%d cherrType=%u\n",
ntn_evt.params.ntn_error_type,
ntn_evt.params.ipa_pipe_number,
ntn_evt.params.ntn_ch_err_type);
}
}
static void ipa3_uc_ntn_event_log_info_handler(
struct IpaHwEventLogInfoData_t *uc_event_top_mmio)
{
if ((uc_event_top_mmio->featureMask & (1 << IPA_HW_FEATURE_NTN)) == 0) {
IPAERR("NTN feature missing 0x%x\n",
uc_event_top_mmio->featureMask);
return;
}
if (uc_event_top_mmio->statsInfo.featureInfo[IPA_HW_FEATURE_NTN].
params.size != sizeof(struct Ipa3HwStatsNTNInfoData_t)) {
IPAERR("NTN stats sz invalid exp=%zu is=%u\n",
sizeof(struct Ipa3HwStatsNTNInfoData_t),
uc_event_top_mmio->statsInfo.
featureInfo[IPA_HW_FEATURE_NTN].params.size);
return;
}
ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_ofst = uc_event_top_mmio->
statsInfo.baseAddrOffset + uc_event_top_mmio->statsInfo.
featureInfo[IPA_HW_FEATURE_NTN].params.offset;
IPAERR("NTN stats ofst=0x%x\n", ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_ofst);
if (ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_ofst +
sizeof(struct Ipa3HwStatsNTNInfoData_t) >=
ipa3_ctx->ctrl->ipa_reg_base_ofst +
ipahal_get_reg_n_ofst(IPA_SRAM_DIRECT_ACCESS_n, 0) +
ipa3_ctx->smem_sz) {
IPAERR("uc_ntn_stats 0x%x outside SRAM\n",
ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_ofst);
return;
}
ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_mmio =
ioremap(ipa3_ctx->ipa_wrapper_base +
ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_ofst,
sizeof(struct Ipa3HwStatsNTNInfoData_t));
if (!ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_mmio) {
IPAERR("fail to ioremap uc ntn stats\n");
return;
}
}
/**
* ipa3_get_ntn_stats() - Query NTN statistics from uc
* @stats: [inout] stats blob from client populated by driver
*
* Returns: 0 on success, negative on failure
*
* @note Cannot be called from atomic context
*
*/
int ipa3_get_ntn_stats(struct Ipa3HwStatsNTNInfoData_t *stats)
{
#define TX_STATS(y) stats->tx_ch_stats[0].y = \
ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_mmio->tx_ch_stats[0].y
#define RX_STATS(y) stats->rx_ch_stats[0].y = \
ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_mmio->rx_ch_stats[0].y
if (unlikely(!ipa3_ctx)) {
IPAERR("IPA driver was not initialized\n");
return -EINVAL;
}
if (!stats || !ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_mmio) {
IPAERR("bad params stats=%p ntn_stats=%p\n",
stats,
ipa3_ctx->uc_ntn_ctx.ntn_uc_stats_mmio);
return -EINVAL;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
TX_STATS(num_pkts_processed);
TX_STATS(tail_ptr_val);
TX_STATS(num_db_fired);
TX_STATS(tx_comp_ring_stats.ringFull);
TX_STATS(tx_comp_ring_stats.ringEmpty);
TX_STATS(tx_comp_ring_stats.ringUsageHigh);
TX_STATS(tx_comp_ring_stats.ringUsageLow);
TX_STATS(tx_comp_ring_stats.RingUtilCount);
TX_STATS(bam_stats.bamFifoFull);
TX_STATS(bam_stats.bamFifoEmpty);
TX_STATS(bam_stats.bamFifoUsageHigh);
TX_STATS(bam_stats.bamFifoUsageLow);
TX_STATS(bam_stats.bamUtilCount);
TX_STATS(num_db);
TX_STATS(num_unexpected_db);
TX_STATS(num_bam_int_handled);
TX_STATS(num_bam_int_in_non_running_state);
TX_STATS(num_qmb_int_handled);
TX_STATS(num_bam_int_handled_while_wait_for_bam);
TX_STATS(num_bam_int_handled_while_not_in_bam);
RX_STATS(max_outstanding_pkts);
RX_STATS(num_pkts_processed);
RX_STATS(rx_ring_rp_value);
RX_STATS(rx_ind_ring_stats.ringFull);
RX_STATS(rx_ind_ring_stats.ringEmpty);
RX_STATS(rx_ind_ring_stats.ringUsageHigh);
RX_STATS(rx_ind_ring_stats.ringUsageLow);
RX_STATS(rx_ind_ring_stats.RingUtilCount);
RX_STATS(bam_stats.bamFifoFull);
RX_STATS(bam_stats.bamFifoEmpty);
RX_STATS(bam_stats.bamFifoUsageHigh);
RX_STATS(bam_stats.bamFifoUsageLow);
RX_STATS(bam_stats.bamUtilCount);
RX_STATS(num_bam_int_handled);
RX_STATS(num_db);
RX_STATS(num_unexpected_db);
RX_STATS(num_pkts_in_dis_uninit_state);
RX_STATS(num_bam_int_handled_while_not_in_bam);
RX_STATS(num_bam_int_handled_while_in_bam_state);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return 0;
}
int ipa3_ntn_init(void)
{
struct ipa3_uc_hdlrs uc_ntn_cbs = { 0 };
uc_ntn_cbs.ipa_uc_event_hdlr = ipa3_uc_ntn_event_handler;
uc_ntn_cbs.ipa_uc_event_log_info_hdlr =
ipa3_uc_ntn_event_log_info_handler;
ipa3_uc_register_handlers(IPA_HW_FEATURE_NTN, &uc_ntn_cbs);
return 0;
}
static int ipa3_uc_send_ntn_setup_pipe_cmd(
struct ipa_ntn_setup_info *ntn_info, u8 dir)
{
int ipa_ep_idx;
int result = 0;
struct ipa_mem_buffer cmd;
struct Ipa3HwNtnSetUpCmdData_t *Ntn_params;
struct IpaHwOffloadSetUpCmdData_t *cmd_data;
if (ntn_info == NULL) {
IPAERR("invalid input\n");
return -EINVAL;
}
ipa_ep_idx = ipa_get_ep_mapping(ntn_info->client);
if (ipa_ep_idx == -1) {
IPAERR("fail to get ep idx.\n");
return -EFAULT;
}
IPADBG("client=%d ep=%d\n", ntn_info->client, ipa_ep_idx);
IPADBG("ring_base_pa = 0x%pa\n",
&ntn_info->ring_base_pa);
IPADBG("ntn_ring_size = %d\n", ntn_info->ntn_ring_size);
IPADBG("buff_pool_base_pa = 0x%pa\n", &ntn_info->buff_pool_base_pa);
IPADBG("num_buffers = %d\n", ntn_info->num_buffers);
IPADBG("data_buff_size = %d\n", ntn_info->data_buff_size);
IPADBG("tail_ptr_base_pa = 0x%pa\n", &ntn_info->ntn_reg_base_ptr_pa);
cmd.size = sizeof(*cmd_data);
cmd.base = dma_alloc_coherent(ipa3_ctx->uc_pdev, cmd.size,
&cmd.phys_base, GFP_KERNEL);
if (cmd.base == NULL) {
IPAERR("fail to get DMA memory.\n");
return -ENOMEM;
}
cmd_data = (struct IpaHwOffloadSetUpCmdData_t *)cmd.base;
cmd_data->protocol = IPA_HW_FEATURE_NTN;
Ntn_params = &cmd_data->SetupCh_params.NtnSetupCh_params;
Ntn_params->ring_base_pa = ntn_info->ring_base_pa;
Ntn_params->buff_pool_base_pa = ntn_info->buff_pool_base_pa;
Ntn_params->ntn_ring_size = ntn_info->ntn_ring_size;
Ntn_params->num_buffers = ntn_info->num_buffers;
Ntn_params->ntn_reg_base_ptr_pa = ntn_info->ntn_reg_base_ptr_pa;
Ntn_params->data_buff_size = ntn_info->data_buff_size;
Ntn_params->ipa_pipe_number = ipa_ep_idx;
Ntn_params->dir = dir;
result = ipa3_uc_send_cmd((u32)(cmd.phys_base),
IPA_CPU_2_HW_CMD_OFFLOAD_CHANNEL_SET_UP,
IPA_HW_2_CPU_OFFLOAD_CMD_STATUS_SUCCESS,
false, 10*HZ);
if (result)
result = -EFAULT;
dma_free_coherent(ipa3_ctx->uc_pdev, cmd.size, cmd.base, cmd.phys_base);
return result;
}
/**
* ipa3_setup_uc_ntn_pipes() - setup uc offload pipes
*/
int ipa3_setup_uc_ntn_pipes(struct ipa_ntn_conn_in_params *in,
ipa_notify_cb notify, void *priv, u8 hdr_len,
struct ipa_ntn_conn_out_params *outp)
{
struct ipa3_ep_context *ep_ul;
struct ipa3_ep_context *ep_dl;
int ipa_ep_idx_ul;
int ipa_ep_idx_dl;
int result = 0;
if (in == NULL) {
IPAERR("invalid input\n");
return -EINVAL;
}
ipa_ep_idx_ul = ipa_get_ep_mapping(in->ul.client);
ipa_ep_idx_dl = ipa_get_ep_mapping(in->dl.client);
if (ipa_ep_idx_ul == -1 || ipa_ep_idx_dl == -1) {
IPAERR("fail to alloc EP.\n");
return -EFAULT;
}
ep_ul = &ipa3_ctx->ep[ipa_ep_idx_ul];
ep_dl = &ipa3_ctx->ep[ipa_ep_idx_dl];
if (ep_ul->valid || ep_dl->valid) {
IPAERR("EP already allocated.\n");
return -EFAULT;
}
memset(ep_ul, 0, offsetof(struct ipa3_ep_context, sys));
memset(ep_dl, 0, offsetof(struct ipa3_ep_context, sys));
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
/* setup ul ep cfg */
ep_ul->valid = 1;
ep_ul->client = in->ul.client;
result = ipa3_enable_data_path(ipa_ep_idx_ul);
if (result) {
IPAERR("enable data path failed res=%d clnt=%d.\n", result,
ipa_ep_idx_ul);
result = -EFAULT;
goto fail;
}
ep_ul->client_notify = notify;
ep_ul->priv = priv;
memset(&ep_ul->cfg, 0, sizeof(ep_ul->cfg));
ep_ul->cfg.nat.nat_en = IPA_SRC_NAT;
ep_ul->cfg.hdr.hdr_len = hdr_len;
ep_ul->cfg.mode.mode = IPA_BASIC;
if (ipa3_cfg_ep(ipa_ep_idx_ul, &ep_ul->cfg)) {
IPAERR("fail to setup ul pipe cfg\n");
result = -EFAULT;
goto fail;
}
if (ipa3_uc_send_ntn_setup_pipe_cmd(&in->ul, IPA_NTN_RX_DIR)) {
IPAERR("fail to send cmd to uc for ul pipe\n");
result = -EFAULT;
goto fail;
}
ipa3_install_dflt_flt_rules(ipa_ep_idx_ul);
outp->ul_uc_db_pa = IPA_UC_NTN_DB_PA_RX;
ep_ul->uc_offload_state |= IPA_UC_OFFLOAD_CONNECTED;
IPADBG("client %d (ep: %d) connected\n", in->ul.client,
ipa_ep_idx_ul);
/* setup dl ep cfg */
ep_dl->valid = 1;
ep_dl->client = in->dl.client;
result = ipa3_enable_data_path(ipa_ep_idx_dl);
if (result) {
IPAERR("enable data path failed res=%d clnt=%d.\n", result,
ipa_ep_idx_dl);
result = -EFAULT;
goto fail;
}
memset(&ep_dl->cfg, 0, sizeof(ep_dl->cfg));
ep_dl->cfg.nat.nat_en = IPA_BYPASS_NAT;
ep_dl->cfg.hdr.hdr_len = hdr_len;
ep_dl->cfg.mode.mode = IPA_BASIC;
if (ipa3_cfg_ep(ipa_ep_idx_dl, &ep_dl->cfg)) {
IPAERR("fail to setup dl pipe cfg\n");
result = -EFAULT;
goto fail;
}
if (ipa3_uc_send_ntn_setup_pipe_cmd(&in->dl, IPA_NTN_TX_DIR)) {
IPAERR("fail to send cmd to uc for dl pipe\n");
result = -EFAULT;
goto fail;
}
outp->dl_uc_db_pa = IPA_UC_NTN_DB_PA_TX;
ep_dl->uc_offload_state |= IPA_UC_OFFLOAD_CONNECTED;
IPADBG("client %d (ep: %d) connected\n", in->dl.client,
ipa_ep_idx_dl);
fail:
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return result;
}
/**
* ipa3_tear_down_uc_offload_pipes() - tear down uc offload pipes
*/
int ipa3_tear_down_uc_offload_pipes(int ipa_ep_idx_ul,
int ipa_ep_idx_dl)
{
struct ipa_mem_buffer cmd;
struct ipa3_ep_context *ep_ul, *ep_dl;
struct IpaHwOffloadCommonChCmdData_t *cmd_data;
union Ipa3HwNtnCommonChCmdData_t *tear;
int result = 0;
IPADBG("ep_ul = %d\n", ipa_ep_idx_ul);
IPADBG("ep_dl = %d\n", ipa_ep_idx_dl);
ep_ul = &ipa3_ctx->ep[ipa_ep_idx_ul];
ep_dl = &ipa3_ctx->ep[ipa_ep_idx_dl];
if (ep_ul->uc_offload_state != IPA_UC_OFFLOAD_CONNECTED ||
ep_dl->uc_offload_state != IPA_UC_OFFLOAD_CONNECTED) {
IPAERR("channel bad state: ul %d dl %d\n",
ep_ul->uc_offload_state, ep_dl->uc_offload_state);
return -EFAULT;
}
cmd.size = sizeof(*cmd_data);
cmd.base = dma_alloc_coherent(ipa3_ctx->uc_pdev, cmd.size,
&cmd.phys_base, GFP_KERNEL);
if (cmd.base == NULL) {
IPAERR("fail to get DMA memory.\n");
return -ENOMEM;
}
IPA_ACTIVE_CLIENTS_INC_SIMPLE();
/* teardown the UL pipe */
cmd_data = (struct IpaHwOffloadCommonChCmdData_t *)cmd.base;
cmd_data->protocol = IPA_HW_FEATURE_NTN;
tear = &cmd_data->CommonCh_params.NtnCommonCh_params;
tear->params.ipa_pipe_number = ipa_ep_idx_ul;
result = ipa3_uc_send_cmd((u32)(cmd.phys_base),
IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN,
IPA_HW_2_CPU_OFFLOAD_CMD_STATUS_SUCCESS,
false, 10*HZ);
if (result) {
IPAERR("fail to tear down ul pipe\n");
result = -EFAULT;
goto fail;
}
ipa3_disable_data_path(ipa_ep_idx_ul);
ipa3_delete_dflt_flt_rules(ipa_ep_idx_ul);
memset(&ipa3_ctx->ep[ipa_ep_idx_ul], 0, sizeof(struct ipa3_ep_context));
IPADBG("ul client (ep: %d) disconnected\n", ipa_ep_idx_ul);
/* teardown the DL pipe */
tear->params.ipa_pipe_number = ipa_ep_idx_dl;
result = ipa3_uc_send_cmd((u32)(cmd.phys_base),
IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN,
IPA_HW_2_CPU_OFFLOAD_CMD_STATUS_SUCCESS,
false, 10*HZ);
if (result) {
IPAERR("fail to tear down dl pipe\n");
result = -EFAULT;
goto fail;
}
ipa3_disable_data_path(ipa_ep_idx_dl);
memset(&ipa3_ctx->ep[ipa_ep_idx_dl], 0, sizeof(struct ipa3_ep_context));
IPADBG("dl client (ep: %d) disconnected\n", ipa_ep_idx_dl);
fail:
dma_free_coherent(ipa3_ctx->uc_pdev, cmd.size, cmd.base, cmd.phys_base);
IPA_ACTIVE_CLIENTS_DEC_SIMPLE();
return result;
}

/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_UC_OFFLOAD_I_H_
#define _IPA_UC_OFFLOAD_I_H_
#include <linux/ipa.h>
#include "ipa_i.h"
/*
* Neutrino protocol related data structures
*/
#define IPA_UC_MAX_NTN_TX_CHANNELS 1
#define IPA_UC_MAX_NTN_RX_CHANNELS 1
#define IPA_NTN_TX_DIR 1
#define IPA_NTN_RX_DIR 2
/**
* @brief Enum value determined based on the feature it
* corresponds to
* +----------------+----------------+
* | 3 bits | 5 bits |
* +----------------+----------------+
* | HW_FEATURE | OPCODE |
* +----------------+----------------+
*
*/
#define FEATURE_ENUM_VAL(feature, opcode) ((feature << 5) | opcode)
#define EXTRACT_UC_FEATURE(value) (value >> 5)
#define IPA_HW_NUM_FEATURES 0x8
/**
* enum ipa3_hw_features - Values that represent the features supported
* in IPA HW
* @IPA_HW_FEATURE_COMMON : Feature related to common operation of IPA HW
* @IPA_HW_FEATURE_MHI : Feature related to MHI operation in IPA HW
* @IPA_HW_FEATURE_POWER_COLLAPSE: Feature related to IPA Power collapse
* @IPA_HW_FEATURE_WDI : Feature related to WDI operation in IPA HW
* @IPA_HW_FEATURE_ZIP: Feature related to CMP/DCMP operation in IPA HW
* @IPA_HW_FEATURE_NTN : Feature related to NTN operation in IPA HW
* @IPA_HW_FEATURE_OFFLOAD : Feature related to offload operation in IPA HW
*/
enum ipa3_hw_features {
IPA_HW_FEATURE_COMMON = 0x0,
IPA_HW_FEATURE_MHI = 0x1,
IPA_HW_FEATURE_POWER_COLLAPSE = 0x2,
IPA_HW_FEATURE_WDI = 0x3,
IPA_HW_FEATURE_ZIP = 0x4,
IPA_HW_FEATURE_NTN = 0x5,
IPA_HW_FEATURE_OFFLOAD = 0x6,
IPA_HW_FEATURE_MAX = IPA_HW_NUM_FEATURES
};
/**
* enum ipa3_hw_2_cpu_events - Values that represent HW event to be sent to CPU.
* @IPA_HW_2_CPU_EVENT_NO_OP : No event present
* @IPA_HW_2_CPU_EVENT_ERROR : Event specify a system error is detected by the
* device
* @IPA_HW_2_CPU_EVENT_LOG_INFO : Event providing logging specific information
*/
enum ipa3_hw_2_cpu_events {
IPA_HW_2_CPU_EVENT_NO_OP =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 0),
IPA_HW_2_CPU_EVENT_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 1),
IPA_HW_2_CPU_EVENT_LOG_INFO =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 2),
};
/**
* enum ipa3_hw_errors - Common error types.
* @IPA_HW_ERROR_NONE : No error present
* @IPA_HW_INVALID_DOORBELL_ERROR : Invalid data read from doorbell
* @IPA_HW_DMA_ERROR : Unexpected DMA error
* @IPA_HW_FATAL_SYSTEM_ERROR : HW has crashed and requires reset.
* @IPA_HW_INVALID_OPCODE : Invalid opcode sent
* @IPA_HW_INVALID_PARAMS : Invalid params for the requested command
* @IPA_HW_CONS_DISABLE_CMD_GSI_STOP_FAILURE : GSI stop failure on a consumer
* pipe disable command
* @IPA_HW_PROD_DISABLE_CMD_GSI_STOP_FAILURE : GSI stop failure on a producer
* pipe disable command
* @IPA_HW_GSI_CH_NOT_EMPTY_FAILURE : GSI channel emptiness validation failed
*/
enum ipa3_hw_errors {
IPA_HW_ERROR_NONE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 0),
IPA_HW_INVALID_DOORBELL_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 1),
IPA_HW_DMA_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 2),
IPA_HW_FATAL_SYSTEM_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 3),
IPA_HW_INVALID_OPCODE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 4),
IPA_HW_INVALID_PARAMS =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 5),
IPA_HW_CONS_DISABLE_CMD_GSI_STOP_FAILURE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 6),
IPA_HW_PROD_DISABLE_CMD_GSI_STOP_FAILURE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 7),
IPA_HW_GSI_CH_NOT_EMPTY_FAILURE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_COMMON, 8)
};
/**
* struct IpaHwSharedMemCommonMapping_t - Structure referring to the common
* section in 128B shared memory located in offset zero of SW Partition in IPA
* SRAM.
* @cmdOp : CPU->HW command opcode. See IPA_CPU_2_HW_COMMANDS
* @cmdParams : CPU->HW command parameter lower 32 bits. The parameter field
* can hold 32 bits of parameters (immediate parameters) or point to a
* structure in system memory (in such case the address must be accessible
* to HW)
* @cmdParams_hi : CPU->HW command parameter higher 32 bits.
* @responseOp : HW->CPU response opcode. See IPA_HW_2_CPU_RESPONSES
* @responseParams : HW->CPU response parameter. The parameter field can hold
* 32 bits of parameters (immediate parameters) or point to a structure in
* system memory
* @eventOp : HW->CPU event opcode. See IPA_HW_2_CPU_EVENTS
* @eventParams : HW->CPU event parameter. The parameter field can hold 32
* bits of parameters (immediate parameters) or point to a structure in
* system memory
* @firstErrorAddress : Contains the address of first error-source on SNOC
* @hwState : State of HW. The state carries information regarding the
* error type.
* @warningCounter : The warnings counter. The counter carries information
* regarding non fatal errors in HW
* @interfaceVersionCommon : The Common interface version as reported by HW
*
* The shared memory is used for communication between IPA HW and CPU.
*/
struct IpaHwSharedMemCommonMapping_t {
u8 cmdOp;
u8 reserved_01;
u16 reserved_03_02;
u32 cmdParams;
u32 cmdParams_hi;
u8 responseOp;
u8 reserved_0D;
u16 reserved_0F_0E;
u32 responseParams;
u8 eventOp;
u8 reserved_15;
u16 reserved_17_16;
u32 eventParams;
u32 firstErrorAddress;
u8 hwState;
u8 warningCounter;
u16 reserved_23_22;
u16 interfaceVersionCommon;
u16 reserved_27_26;
} __packed;
/**
* union Ipa3HwFeatureInfoData_t - parameters for stats/config blob
*
* @offset : Location of a feature within the EventInfoData
* @size : Size of the feature
*/
union Ipa3HwFeatureInfoData_t {
struct IpaHwFeatureInfoParams_t {
u32 offset:16;
u32 size:16;
} __packed params;
u32 raw32b;
} __packed;
/**
* union IpaHwErrorEventData_t - HW->CPU Common Events
 * @errorType : Set when a system error is detected by the HW. Type of
 * error is specified by IPA_HW_ERRORS
* @reserved : Reserved
*/
union IpaHwErrorEventData_t {
struct IpaHwErrorEventParams_t {
u32 errorType:8;
u32 reserved:24;
} __packed params;
u32 raw32b;
} __packed;
/**
* struct Ipa3HwEventInfoData_t - Structure holding the parameters for
* statistics and config info
*
* @baseAddrOffset : Base Address Offset of the statistics or config
* structure from IPA_WRAPPER_BASE
 * @featureInfo : Location and size of each feature within
 * the statistics or config structure
*
* @note Information about each feature in the featureInfo[]
* array is populated at predefined indices per the IPA_HW_FEATURES
* enum definition
*/
struct Ipa3HwEventInfoData_t {
u32 baseAddrOffset;
union Ipa3HwFeatureInfoData_t featureInfo[IPA_HW_NUM_FEATURES];
} __packed;
/**
* struct IpaHwEventLogInfoData_t - Structure holding the parameters for
* IPA_HW_2_CPU_EVENT_LOG_INFO Event
*
* @featureMask : Mask indicating the features enabled in HW.
* Refer IPA_HW_FEATURE_MASK
* @circBuffBaseAddrOffset : Base Address Offset of the Circular Event
* Log Buffer structure
* @statsInfo : Statistics related information
* @configInfo : Configuration related information
*
* @note The offset location of this structure from IPA_WRAPPER_BASE
* will be provided as Event Params for the IPA_HW_2_CPU_EVENT_LOG_INFO
* Event
*/
struct IpaHwEventLogInfoData_t {
u32 featureMask;
u32 circBuffBaseAddrOffset;
struct Ipa3HwEventInfoData_t statsInfo;
struct Ipa3HwEventInfoData_t configInfo;
} __packed;
/**
* struct ipa3_uc_ntn_ctx
* @ntn_uc_stats_ofst: Neutrino stats offset
* @ntn_uc_stats_mmio: Neutrino stats
* @priv: private data of client
* @uc_ready_cb: uc Ready cb
*/
struct ipa3_uc_ntn_ctx {
u32 ntn_uc_stats_ofst;
struct Ipa3HwStatsNTNInfoData_t *ntn_uc_stats_mmio;
void *priv;
ipa_uc_ready_cb uc_ready_cb;
};
/**
* enum ipa3_hw_2_cpu_ntn_events - Values that represent HW event
* to be sent to CPU
* @IPA_HW_2_CPU_EVENT_NTN_ERROR : Event to specify that HW
* detected an error in NTN
*
*/
enum ipa3_hw_2_cpu_ntn_events {
IPA_HW_2_CPU_EVENT_NTN_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_NTN, 0),
};
/**
* enum ipa3_hw_ntn_errors - NTN specific error types.
 * @IPA_HW_NTN_ERROR_NONE : No error
* @IPA_HW_NTN_CHANNEL_ERROR : Error is specific to channel
*/
enum ipa3_hw_ntn_errors {
IPA_HW_NTN_ERROR_NONE = 0,
IPA_HW_NTN_CHANNEL_ERROR = 1
};
/**
* enum ipa3_hw_ntn_channel_states - Values that represent NTN
* channel state machine.
* @IPA_HW_NTN_CHANNEL_STATE_INITED_DISABLED : Channel is
* initialized but disabled
* @IPA_HW_NTN_CHANNEL_STATE_RUNNING : Channel is running.
* Entered after SET_UP_COMMAND is processed successfully
* @IPA_HW_NTN_CHANNEL_STATE_ERROR : Channel is in error state
* @IPA_HW_NTN_CHANNEL_STATE_INVALID : Invalid state. Shall not
* be in use in operational scenario
*
* These states apply to both Tx and Rx paths. These do not reflect the
* sub-state the state machine may be in.
*/
enum ipa3_hw_ntn_channel_states {
IPA_HW_NTN_CHANNEL_STATE_INITED_DISABLED = 1,
IPA_HW_NTN_CHANNEL_STATE_RUNNING = 2,
IPA_HW_NTN_CHANNEL_STATE_ERROR = 3,
IPA_HW_NTN_CHANNEL_STATE_INVALID = 0xFF
};
/**
* enum ipa3_hw_ntn_channel_errors - List of NTN Channel error
* types. This is present in the event param
 * @IPA_HW_NTN_CH_ERR_NONE: No error
 * @IPA_HW_NTN_TX_RING_WP_UPDATE_FAIL: Write pointer update
 * failed in Tx ring
 * @IPA_HW_NTN_TX_FSM_ERROR: Error in the state machine
 * transition
 * @IPA_HW_NTN_TX_COMP_RE_FETCH_FAIL: Error while calculating
 * num RE to bring
 * @IPA_HW_NTN_RX_RING_WP_UPDATE_FAIL: Write pointer update
 * failed in Rx ring
* @IPA_HW_NTN_RX_FSM_ERROR: Error in the state machine
* transition
* @IPA_HW_NTN_RX_CACHE_NON_EMPTY:
* @IPA_HW_NTN_CH_ERR_RESERVED:
*
* These states apply to both Tx and Rx paths. These do not
* reflect the sub-state the state machine may be in.
*/
enum ipa3_hw_ntn_channel_errors {
IPA_HW_NTN_CH_ERR_NONE = 0,
IPA_HW_NTN_TX_RING_WP_UPDATE_FAIL = 1,
IPA_HW_NTN_TX_FSM_ERROR = 2,
IPA_HW_NTN_TX_COMP_RE_FETCH_FAIL = 3,
IPA_HW_NTN_RX_RING_WP_UPDATE_FAIL = 4,
IPA_HW_NTN_RX_FSM_ERROR = 5,
IPA_HW_NTN_RX_CACHE_NON_EMPTY = 6,
IPA_HW_NTN_CH_ERR_RESERVED = 0xFF
};
/**
* struct Ipa3HwNtnSetUpCmdData_t - Ntn setup command data
* @ring_base_pa: physical address of the base of the Tx/Rx NTN
* ring
* @buff_pool_base_pa: physical address of the base of the Tx/Rx
* buffer pool
* @ntn_ring_size: size of the Tx/Rx NTN ring
* @num_buffers: Rx/tx buffer pool size
* @ntn_reg_base_ptr_pa: physical address of the Tx/Rx NTN
* Ring's tail pointer
* @ipa_pipe_number: IPA pipe number that has to be used for the
* Tx/Rx path
* @dir: Tx/Rx Direction
* @data_buff_size: size of the each data buffer allocated in
* DDR
*/
struct Ipa3HwNtnSetUpCmdData_t {
u32 ring_base_pa;
u32 buff_pool_base_pa;
u16 ntn_ring_size;
u16 num_buffers;
u32 ntn_reg_base_ptr_pa;
u8 ipa_pipe_number;
u8 dir;
u16 data_buff_size;
} __packed;
/**
 * union Ipa3HwNtnCommonChCmdData_t - Structure holding the
 * parameters for the NTN Tear down command
 *
 * @ipa_pipe_number: IPA pipe number. This could be Tx or an Rx pipe
*/
union Ipa3HwNtnCommonChCmdData_t {
struct IpaHwNtnCommonChCmdParams_t {
u32 ipa_pipe_number :8;
u32 reserved :24;
} __packed params;
uint32_t raw32b;
} __packed;
/**
 * union Ipa3HwNTNErrorEventData_t - Structure holding the
 * IPA_HW_2_CPU_EVENT_NTN_ERROR event. The parameters are passed
 * as immediate params in the shared memory
 *
 * @ntn_error_type: type of NTN error (ipa3_hw_ntn_errors)
 * @reserved: reserved
 * @ipa_pipe_number: IPA pipe number on which the error happened.
 * Applicable only if error type indicates a channel error
 * @ntn_ch_err_type: Information about the channel error (if
 * available)
*/
union Ipa3HwNTNErrorEventData_t {
struct IpaHwNTNErrorEventParams_t {
u32 ntn_error_type :8;
u32 reserved :8;
u32 ipa_pipe_number :8;
u32 ntn_ch_err_type :8;
} __packed params;
uint32_t raw32b;
} __packed;
/**
* struct NTN3RxInfoData_t - NTN Structure holding the Rx pipe
* information
*
 * @max_outstanding_pkts: Number of outstanding packets in Rx
 * Ring
 * @num_pkts_processed: Number of packets processed - cumulative
 * @rx_ring_rp_value: Read pointer last advertised to the WLAN FW
 * @rx_ind_ring_stats: Common ring statistics for the Rx indication ring
 * @bam_stats: Common BAM statistics for the channel
 * @num_bam_int_handled: Number of Bam Interrupts handled by FW
 * @num_db: Number of times the doorbell was rung
 * @num_unexpected_db: Number of unexpected doorbells
 * @num_pkts_in_dis_uninit_state: Number of packets received while the
 * channel was disabled or uninitialized
 * @num_bam_int_handled_while_not_in_bam: Number of Bam
 * Interrupts handled by FW while not in BAM state
 * @num_bam_int_handled_while_in_bam_state: Number of Bam
 * Interrupts handled by FW while in BAM state
*/
struct NTN3RxInfoData_t {
u32 max_outstanding_pkts;
u32 num_pkts_processed;
u32 rx_ring_rp_value;
struct IpaHwRingStats_t rx_ind_ring_stats;
struct IpaHwBamStats_t bam_stats;
u32 num_bam_int_handled;
u32 num_db;
u32 num_unexpected_db;
u32 num_pkts_in_dis_uninit_state;
u32 num_bam_int_handled_while_not_in_bam;
u32 num_bam_int_handled_while_in_bam_state;
} __packed;
/**
 * struct NTNTxInfoData_t - Structure holding the NTN Tx channel
 * information. Ensure that this is always word aligned
 *
 * @num_pkts_processed: Number of packets processed - cumulative
 * @tail_ptr_val: Latest value of doorbell written to copy engine
 * @num_db_fired: Number of DB from uC FW to Copy engine
 * @tx_comp_ring_stats: Common ring statistics for the Tx completion ring
 * @bam_stats: Common BAM statistics for the channel
 * @num_db: Number of times the doorbell was rung
 * @num_unexpected_db: Number of unexpected doorbells
 * @num_bam_int_handled: Number of Bam Interrupts handled by FW
 * @num_bam_int_in_non_running_state: Number of Bam interrupts
 * while not in Running state
 * @num_qmb_int_handled: Number of QMB interrupts handled
 * @num_bam_int_handled_while_wait_for_bam: Number of times the
 * Imm Cmd is injected due to fw_desc change
 * @num_bam_int_handled_while_not_in_bam: Number of Bam Interrupts
 * handled by FW while not in BAM state
*/
struct NTNTxInfoData_t {
u32 num_pkts_processed;
u32 tail_ptr_val;
u32 num_db_fired;
struct IpaHwRingStats_t tx_comp_ring_stats;
struct IpaHwBamStats_t bam_stats;
u32 num_db;
u32 num_unexpected_db;
u32 num_bam_int_handled;
u32 num_bam_int_in_non_running_state;
u32 num_qmb_int_handled;
u32 num_bam_int_handled_while_wait_for_bam;
u32 num_bam_int_handled_while_not_in_bam;
} __packed;
/**
 * struct Ipa3HwStatsNTNInfoData_t - Structure holding the NTN Rx and Tx
 * channel statistics. Ensure that this is always word aligned
*
*/
struct Ipa3HwStatsNTNInfoData_t {
struct NTN3RxInfoData_t rx_ch_stats[IPA_UC_MAX_NTN_RX_CHANNELS];
struct NTNTxInfoData_t tx_ch_stats[IPA_UC_MAX_NTN_TX_CHANNELS];
} __packed;
/*
* uC offload related data structures
*/
#define IPA_UC_OFFLOAD_CONNECTED BIT(0)
#define IPA_UC_OFFLOAD_ENABLED BIT(1)
#define IPA_UC_OFFLOAD_RESUMED BIT(2)
/**
* enum ipa_cpu_2_hw_offload_commands - Values that represent
* the offload commands from CPU
 * @IPA_CPU_2_HW_CMD_OFFLOAD_CHANNEL_SET_UP : Command to set up
 * Offload protocol's Tx/Rx Path
 * @IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN : Command to tear down
 * Offload protocol's Tx/Rx Path
*/
enum ipa_cpu_2_hw_offload_commands {
IPA_CPU_2_HW_CMD_OFFLOAD_CHANNEL_SET_UP =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 1),
IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN,
};
/**
* enum ipa3_hw_offload_channel_states - Values that represent
* offload channel state machine.
* @IPA_HW_OFFLOAD_CHANNEL_STATE_INITED_DISABLED : Channel is
* initialized but disabled
* @IPA_HW_OFFLOAD_CHANNEL_STATE_RUNNING : Channel is running.
* Entered after SET_UP_COMMAND is processed successfully
* @IPA_HW_OFFLOAD_CHANNEL_STATE_ERROR : Channel is in error state
* @IPA_HW_OFFLOAD_CHANNEL_STATE_INVALID : Invalid state. Shall not
* be in use in operational scenario
*
* These states apply to both Tx and Rx paths. These do not
* reflect the sub-state the state machine may be in
*/
enum ipa3_hw_offload_channel_states {
IPA_HW_OFFLOAD_CHANNEL_STATE_INITED_DISABLED = 1,
IPA_HW_OFFLOAD_CHANNEL_STATE_RUNNING = 2,
IPA_HW_OFFLOAD_CHANNEL_STATE_ERROR = 3,
IPA_HW_OFFLOAD_CHANNEL_STATE_INVALID = 0xFF
};
/**
 * enum ipa3_hw_2_cpu_offload_cmd_resp_status - Values that represent
* offload related command response status to be sent to CPU.
*/
enum ipa3_hw_2_cpu_offload_cmd_resp_status {
IPA_HW_2_CPU_OFFLOAD_CMD_STATUS_SUCCESS =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 0),
IPA_HW_2_CPU_OFFLOAD_MAX_TX_CHANNELS =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 1),
IPA_HW_2_CPU_OFFLOAD_TX_RING_OVERRUN_POSSIBILITY =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 2),
IPA_HW_2_CPU_OFFLOAD_TX_RING_SET_UP_FAILURE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 3),
IPA_HW_2_CPU_OFFLOAD_TX_RING_PARAMS_UNALIGNED =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 4),
IPA_HW_2_CPU_OFFLOAD_UNKNOWN_TX_CHANNEL =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 5),
IPA_HW_2_CPU_OFFLOAD_TX_INVALID_FSM_TRANSITION =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 6),
IPA_HW_2_CPU_OFFLOAD_TX_FSM_TRANSITION_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 7),
IPA_HW_2_CPU_OFFLOAD_MAX_RX_CHANNELS =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 8),
IPA_HW_2_CPU_OFFLOAD_RX_RING_PARAMS_UNALIGNED =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 9),
IPA_HW_2_CPU_OFFLOAD_RX_RING_SET_UP_FAILURE =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 10),
IPA_HW_2_CPU_OFFLOAD_UNKNOWN_RX_CHANNEL =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 11),
IPA_HW_2_CPU_OFFLOAD_RX_INVALID_FSM_TRANSITION =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 12),
IPA_HW_2_CPU_OFFLOAD_RX_FSM_TRANSITION_ERROR =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 13),
IPA_HW_2_CPU_OFFLOAD_RX_RING_OVERRUN_POSSIBILITY =
FEATURE_ENUM_VAL(IPA_HW_FEATURE_OFFLOAD, 14),
};
/**
 * union IpaHwSetUpCmd - Protocol specific channel set up parameters
 *
 * @NtnSetupCh_params: NTN channel set up command data
*/
union IpaHwSetUpCmd {
struct Ipa3HwNtnSetUpCmdData_t NtnSetupCh_params;
} __packed;
/**
 * struct IpaHwOffloadSetUpCmdData_t - Structure holding the parameters
 * for IPA_CPU_2_HW_CMD_OFFLOAD_CHANNEL_SET_UP
 *
 * @protocol: Offload protocol identifier
 * @SetupCh_params: Protocol specific channel set up parameters
*/
struct IpaHwOffloadSetUpCmdData_t {
u8 protocol;
union IpaHwSetUpCmd SetupCh_params;
} __packed;
/**
 * union IpaHwCommonChCmd - Union holding the parameters
 * for IPA_CPU_2_HW_CMD_OFFLOAD_TEAR_DOWN
 *
 * @NtnCommonCh_params: NTN common channel command data
union IpaHwCommonChCmd {
union Ipa3HwNtnCommonChCmdData_t NtnCommonCh_params;
} __packed;
struct IpaHwOffloadCommonChCmdData_t {
u8 protocol;
union IpaHwCommonChCmd CommonCh_params;
} __packed;
#endif /* _IPA_UC_OFFLOAD_I_H_ */

obj-$(CONFIG_IPA3) += ipa_hal.o
ipa_hal-y := ipahal.o ipahal_reg.o ipahal_fltrt.o

/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPAHAL_H_
#define _IPAHAL_H_
#include <linux/msm_ipa.h>
#include "../../ipa_common_i.h"
/*
* Immediate command names
*
* NOTE:: Any change to this enum, need to change to ipahal_imm_cmd_name_to_str
* array as well.
*/
enum ipahal_imm_cmd_name {
IPA_IMM_CMD_IP_V4_FILTER_INIT,
IPA_IMM_CMD_IP_V6_FILTER_INIT,
IPA_IMM_CMD_IP_V4_NAT_INIT,
IPA_IMM_CMD_IP_V4_ROUTING_INIT,
IPA_IMM_CMD_IP_V6_ROUTING_INIT,
IPA_IMM_CMD_HDR_INIT_LOCAL,
IPA_IMM_CMD_HDR_INIT_SYSTEM,
IPA_IMM_CMD_REGISTER_WRITE,
IPA_IMM_CMD_NAT_DMA,
IPA_IMM_CMD_IP_PACKET_INIT,
IPA_IMM_CMD_DMA_SHARED_MEM,
IPA_IMM_CMD_IP_PACKET_TAG_STATUS,
IPA_IMM_CMD_DMA_TASK_32B_ADDR,
IPA_IMM_CMD_MAX,
};
/* Immediate commands abstracted structures */
/*
* struct ipahal_imm_cmd_ip_v4_filter_init - IP_V4_FILTER_INIT cmd payload
* Inits IPv4 filter block.
* @hash_rules_addr: Addr in sys mem where ipv4 hashable flt tbl starts
* @hash_rules_size: Size in bytes of the hashable tbl to cpy to local mem
* @hash_local_addr: Addr in shared mem where ipv4 hashable flt tbl should
* be copied to
* @nhash_rules_addr: Addr in sys mem where ipv4 non-hashable flt tbl starts
* @nhash_rules_size: Size in bytes of the non-hashable tbl to cpy to local mem
* @nhash_local_addr: Addr in shared mem where ipv4 non-hashable flt tbl should
* be copied to
*/
struct ipahal_imm_cmd_ip_v4_filter_init {
u64 hash_rules_addr;
u32 hash_rules_size;
u32 hash_local_addr;
u64 nhash_rules_addr;
u32 nhash_rules_size;
u32 nhash_local_addr;
};
/*
* struct ipahal_imm_cmd_ip_v6_filter_init - IP_V6_FILTER_INIT cmd payload
* Inits IPv6 filter block.
* @hash_rules_addr: Addr in sys mem where ipv6 hashable flt tbl starts
* @hash_rules_size: Size in bytes of the hashable tbl to cpy to local mem
* @hash_local_addr: Addr in shared mem where ipv6 hashable flt tbl should
* be copied to
* @nhash_rules_addr: Addr in sys mem where ipv6 non-hashable flt tbl starts
* @nhash_rules_size: Size in bytes of the non-hashable tbl to cpy to local mem
* @nhash_local_addr: Addr in shared mem where ipv6 non-hashable flt tbl should
* be copied to
*/
struct ipahal_imm_cmd_ip_v6_filter_init {
u64 hash_rules_addr;
u32 hash_rules_size;
u32 hash_local_addr;
u64 nhash_rules_addr;
u32 nhash_rules_size;
u32 nhash_local_addr;
};
/*
* struct ipahal_imm_cmd_ip_v4_nat_init - IP_V4_NAT_INIT cmd payload
 * Inits IPv4 NAT block. Initiates the NAT table with its dimensions,
 * location, cache address and other related parameters.
* @table_index: For future support of multiple NAT tables
* @ipv4_rules_addr: Addr in sys/shared mem where ipv4 NAT rules start
* @ipv4_rules_addr_shared: ipv4_rules_addr in shared mem (if not, then sys)
 * @ipv4_expansion_rules_addr: Addr in sys/shared mem where expansion NAT
* table starts. IPv4 NAT rules that result in NAT collision are located
* in this table.
* @ipv4_expansion_rules_addr_shared: ipv4_expansion_rules_addr in
* shared mem (if not, then sys)
* @index_table_addr: Addr in sys/shared mem where index table, which points
* to NAT table starts
* @index_table_addr_shared: index_table_addr in shared mem (if not, then sys)
* @index_table_expansion_addr: Addr in sys/shared mem where expansion index
* table starts
* @index_table_expansion_addr_shared: index_table_expansion_addr in
* shared mem (if not, then sys)
* @size_base_tables: Num of entries in NAT tbl and idx tbl (each)
 * @size_expansion_tables: Num of entries in NAT expansion tbl and expansion
 * idx tbl (each)
* @public_ip_addr: public IP address
*/
struct ipahal_imm_cmd_ip_v4_nat_init {
u8 table_index;
u64 ipv4_rules_addr;
bool ipv4_rules_addr_shared;
u64 ipv4_expansion_rules_addr;
bool ipv4_expansion_rules_addr_shared;
u64 index_table_addr;
bool index_table_addr_shared;
u64 index_table_expansion_addr;
bool index_table_expansion_addr_shared;
u16 size_base_tables;
u16 size_expansion_tables;
u32 public_ip_addr;
};
/*
* struct ipahal_imm_cmd_ip_v4_routing_init - IP_V4_ROUTING_INIT cmd payload
* Inits IPv4 routing table/structure - with the rules and other related params
* @hash_rules_addr: Addr in sys mem where ipv4 hashable rt tbl starts
* @hash_rules_size: Size in bytes of the hashable tbl to cpy to local mem
* @hash_local_addr: Addr in shared mem where ipv4 hashable rt tbl should
* be copied to
* @nhash_rules_addr: Addr in sys mem where ipv4 non-hashable rt tbl starts
* @nhash_rules_size: Size in bytes of the non-hashable tbl to cpy to local mem
* @nhash_local_addr: Addr in shared mem where ipv4 non-hashable rt tbl should
* be copied to
*/
struct ipahal_imm_cmd_ip_v4_routing_init {
u64 hash_rules_addr;
u32 hash_rules_size;
u32 hash_local_addr;
u64 nhash_rules_addr;
u32 nhash_rules_size;
u32 nhash_local_addr;
};
/*
* struct ipahal_imm_cmd_ip_v6_routing_init - IP_V6_ROUTING_INIT cmd payload
* Inits IPv6 routing table/structure - with the rules and other related params
* @hash_rules_addr: Addr in sys mem where ipv6 hashable rt tbl starts
* @hash_rules_size: Size in bytes of the hashable tbl to cpy to local mem
* @hash_local_addr: Addr in shared mem where ipv6 hashable rt tbl should
* be copied to
* @nhash_rules_addr: Addr in sys mem where ipv6 non-hashable rt tbl starts
* @nhash_rules_size: Size in bytes of the non-hashable tbl to cpy to local mem
* @nhash_local_addr: Addr in shared mem where ipv6 non-hashable rt tbl should
* be copied to
*/
struct ipahal_imm_cmd_ip_v6_routing_init {
u64 hash_rules_addr;
u32 hash_rules_size;
u32 hash_local_addr;
u64 nhash_rules_addr;
u32 nhash_rules_size;
u32 nhash_local_addr;
};
/*
* struct ipahal_imm_cmd_hdr_init_local - HDR_INIT_LOCAL cmd payload
* Inits hdr table within local mem with the hdrs and their length.
* @hdr_table_addr: Word address in sys mem where the table starts (SRC)
* @size_hdr_table: Size of the above (in bytes)
* @hdr_addr: header address in IPA sram (used as DST for memory copy)
*/
struct ipahal_imm_cmd_hdr_init_local {
u64 hdr_table_addr;
u32 size_hdr_table;
u32 hdr_addr;
};
/*
* struct ipahal_imm_cmd_hdr_init_system - HDR_INIT_SYSTEM cmd payload
* Inits hdr table within sys mem with the hdrs and their length.
* @hdr_table_addr: Word address in system memory where the hdrs tbl starts.
*/
struct ipahal_imm_cmd_hdr_init_system {
u64 hdr_table_addr;
};
/*
* struct ipahal_imm_cmd_nat_dma - NAT_DMA cmd payload
 * Perform DMA operation on NAT related mem addresses. Copy data into
* different locations within NAT associated tbls. (For add/remove NAT rules)
* @table_index: NAT tbl index. Defines the NAT tbl on which to perform DMA op.
* @base_addr: Base addr to which the DMA operation should be performed.
* @offset: offset in bytes from base addr to write 'data' to
* @data: data to be written
*/
struct ipahal_imm_cmd_nat_dma {
u8 table_index;
u8 base_addr;
u32 offset;
u16 data;
};
/*
* struct ipahal_imm_cmd_ip_packet_init - IP_PACKET_INIT cmd payload
* Configuration for specific IP pkt. Shall be called prior to an IP pkt
* data. Pkt will not go through IP pkt processing.
* @destination_pipe_index: Destination pipe index (in case routing
* is enabled, this field will overwrite the rt rule)
*/
struct ipahal_imm_cmd_ip_packet_init {
u32 destination_pipe_index;
};
/*
* enum ipa_pipeline_clear_option - Values for pipeline clear waiting options
* @IPAHAL_HPS_CLEAR: Wait for HPS clear. All queues except high priority queue
* shall not be serviced until HPS is clear of packets or immediate commands.
* The high priority Rx queue / Q6ZIP group shall still be serviced normally.
*
* @IPAHAL_SRC_GRP_CLEAR: Wait for originating source group to be clear
* (for no packet contexts allocated to the originating source group).
* The source group / Rx queue shall not be serviced until all previously
* allocated packet contexts are released. All other source groups/queues shall
* be serviced normally.
*
* @IPAHAL_FULL_PIPELINE_CLEAR: Wait for full pipeline to be clear.
* All groups / Rx queues shall not be serviced until IPA pipeline is fully
* clear. This should be used for debug only.
*/
enum ipahal_pipeline_clear_option {
IPAHAL_HPS_CLEAR,
IPAHAL_SRC_GRP_CLEAR,
IPAHAL_FULL_PIPELINE_CLEAR
};
/*
* struct ipahal_imm_cmd_register_write - REGISTER_WRITE cmd payload
* Write value to register. Allows reg changes to be synced with data packet
* and other immediate commands. Can be used to access the sram
* @offset: offset from IPA base address - Lower 16bit of the IPA reg addr
* @value: value to write to register
* @value_mask: mask specifying which value bits to write to the register
 * @skip_pipeline_clear: whether to skip pipeline clear waiting (don't wait)
 * @pipeline_clear_options: options for pipeline clear waiting
*/
struct ipahal_imm_cmd_register_write {
u32 offset;
u32 value;
u32 value_mask;
bool skip_pipeline_clear;
enum ipahal_pipeline_clear_option pipeline_clear_options;
};
/*
* struct ipahal_imm_cmd_dma_shared_mem - DMA_SHARED_MEM cmd payload
* Perform mem copy into or out of the SW area of IPA local mem
* @size: Size in bytes of data to copy. Expected size is up to 2K bytes
* @local_addr: Address in IPA local memory
* @is_read: Read operation from local memory? If not, then write.
 * @skip_pipeline_clear: whether to skip pipeline clear waiting (don't wait)
 * @pipeline_clear_options: options for pipeline clear waiting
* @system_addr: Address in system memory
*/
struct ipahal_imm_cmd_dma_shared_mem {
u32 size;
u32 local_addr;
bool is_read;
bool skip_pipeline_clear;
enum ipahal_pipeline_clear_option pipeline_clear_options;
u64 system_addr;
};
/*
* struct ipahal_imm_cmd_ip_packet_tag_status - IP_PACKET_TAG_STATUS cmd payload
 * This cmd is used to allow SW to track HW processing by setting a TAG
* value that is passed back to SW inside Packet Status information.
* TAG info will be provided as part of Packet Status info generated for
* the next pkt transferred over the pipe.
* This immediate command must be followed by a packet in the same transfer.
* @tag: Tag that is provided back to SW
*/
struct ipahal_imm_cmd_ip_packet_tag_status {
u64 tag;
};
/*
* struct ipahal_imm_cmd_dma_task_32b_addr - IPA_DMA_TASK_32B_ADDR cmd payload
* Used by clients using 32bit addresses. Used to perform DMA operation on
* multiple descriptors.
 * The Opcode is dynamic, where it holds the number of buffers to process
 * @cmplt: Complete flag: If true, IPA interrupts SW when the entire
 * DMA related data was completely xfered to its destination.
 * @eof: End Of Frame flag: If true, IPA asserts the EOT to the
 * dest client. This is used for aggr sequence
* @flsh: Flush flag: If true pkt will go through the IPA blocks but
* will not be xfered to dest client but rather will be discarded
* @lock: Lock pipe flag: If true, IPA will stop processing descriptors
* from other EPs in the same src grp (RX queue)
* @unlock: Unlock pipe flag: If true, IPA will stop exclusively
* servicing current EP out of the src EPs of the grp (RX queue)
* @size1: Size of buffer1 data
* @addr1: Pointer to buffer1 data
* @packet_size: Total packet size. If a pkt send using multiple DMA_TASKs,
* only the first one needs to have this field set. It will be ignored
* in subsequent DMA_TASKs until the packet ends (EOT). First DMA_TASK
* must contain this field (2 or more buffers) or EOT.
*/
struct ipahal_imm_cmd_dma_task_32b_addr {
bool cmplt;
bool eof;
bool flsh;
bool lock;
bool unlock;
u32 size1;
u32 addr1;
u32 packet_size;
};
/*
* struct ipahal_imm_cmd_pyld - Immediate cmd payload information
* @len: length of the buffer
* @data: buffer contains the immediate command payload. Buffer goes
* back to back with this structure
*/
struct ipahal_imm_cmd_pyld {
u16 len;
u8 data[0];
};
/* Immediate command Function APIs */
/*
* ipahal_imm_cmd_name_str() - returns string that represent the imm cmd
* @cmd_name: [in] Immediate command name
*/
const char *ipahal_imm_cmd_name_str(enum ipahal_imm_cmd_name cmd_name);
/*
* ipahal_imm_cmd_get_opcode() - Get the fixed opcode of the immediate command
*/
u16 ipahal_imm_cmd_get_opcode(enum ipahal_imm_cmd_name cmd);
/*
* ipahal_imm_cmd_get_opcode_param() - Get the opcode of an immediate command
* that supports dynamic opcode
 * Some command opcodes are not totally fixed; part of the opcode is
 * a supplied parameter. E.g. the Low-Byte is fixed and the Hi-Byte
 * is a given parameter.
 * This API will return the composed opcode of the command given
 * the parameter
 * Note: Use this API only for immediate commands that support Dynamic Opcode
*/
u16 ipahal_imm_cmd_get_opcode_param(enum ipahal_imm_cmd_name cmd, int param);
/*
 * ipahal_construct_imm_cmd() - Construct immediate command
 * This function builds an imm cmd bulk that can be sent to IPA
* The command will be allocated dynamically.
* After done using it, call ipahal_destroy_imm_cmd() to release it
*/
struct ipahal_imm_cmd_pyld *ipahal_construct_imm_cmd(
enum ipahal_imm_cmd_name cmd, const void *params, bool is_atomic_ctx);
/*
 * ipahal_construct_nop_imm_cmd() - Construct immediate command for No-Op
 * Core driver may want functionality to inject NOP commands to IPA
 * to ensure e.g., pipeline clear before some other operation.
 * The functionality given by this function can be reached by
 * ipahal_construct_imm_cmd(). This function is a helper to the core driver
 * to reach this NOP functionality easily.
* @skip_pipline_clear: if to skip pipeline clear waiting (don't wait)
* @pipline_clr_opt: options for pipeline clear waiting
* @is_atomic_ctx: is called in atomic context or can sleep?
*/
struct ipahal_imm_cmd_pyld *ipahal_construct_nop_imm_cmd(
bool skip_pipline_clear,
enum ipahal_pipeline_clear_option pipline_clr_opt,
bool is_atomic_ctx);
/*
* ipahal_destroy_imm_cmd() - Destroy/Release bulk that was built
* by the construction functions
*/
static inline void ipahal_destroy_imm_cmd(struct ipahal_imm_cmd_pyld *pyld)
{
kfree(pyld);
}
/* IPA Status packet Structures and Function APIs */
/*
* enum ipahal_pkt_status_opcode - Packet Status Opcode
* @IPAHAL_STATUS_OPCODE_PACKET_2ND_PASS: Packet Status generated as part of
* IPA second processing pass for a packet (i.e. IPA XLAT processing for
* the translated packet).
*/
enum ipahal_pkt_status_opcode {
IPAHAL_PKT_STATUS_OPCODE_PACKET = 0,
IPAHAL_PKT_STATUS_OPCODE_NEW_FRAG_RULE,
IPAHAL_PKT_STATUS_OPCODE_DROPPED_PACKET,
IPAHAL_PKT_STATUS_OPCODE_SUSPENDED_PACKET,
IPAHAL_PKT_STATUS_OPCODE_LOG,
IPAHAL_PKT_STATUS_OPCODE_DCMP,
IPAHAL_PKT_STATUS_OPCODE_PACKET_2ND_PASS,
};
/*
* enum ipahal_pkt_status_exception - Packet Status exception type
* @IPAHAL_PKT_STATUS_EXCEPTION_PACKET_LENGTH: formerly IHL exception.
*
* Note: IPTYPE, PACKET_LENGTH and PACKET_THRESHOLD exceptions means that
* partial / no IP processing took place and corresponding Status Mask
* fields should be ignored. Flt and rt info is not valid.
*
* NOTE:: Any change to this enum, need to change to
* ipahal_pkt_status_exception_to_str array as well.
*/
enum ipahal_pkt_status_exception {
IPAHAL_PKT_STATUS_EXCEPTION_NONE = 0,
IPAHAL_PKT_STATUS_EXCEPTION_DEAGGR,
IPAHAL_PKT_STATUS_EXCEPTION_IPTYPE,
IPAHAL_PKT_STATUS_EXCEPTION_PACKET_LENGTH,
IPAHAL_PKT_STATUS_EXCEPTION_PACKET_THRESHOLD,
IPAHAL_PKT_STATUS_EXCEPTION_FRAG_RULE_MISS,
IPAHAL_PKT_STATUS_EXCEPTION_SW_FILT,
IPAHAL_PKT_STATUS_EXCEPTION_NAT,
IPAHAL_PKT_STATUS_EXCEPTION_MAX,
};
/*
* enum ipahal_pkt_status_mask - Packet Status bitmask shift values of
* the contained flags. This bitmask indicates flags on the properties of
* the packet as well as IPA processing it may had.
* @FRAG_PROCESS: Frag block processing flag: Was pkt processed by frag block?
* Also means the frag info is valid unless exception or first frag
* @FILT_PROCESS: Flt block processing flag: Was pkt processed by flt block?
* Also means that flt info is valid.
* @NAT_PROCESS: NAT block processing flag: Was pkt processed by NAT block?
* Also means that NAT info is valid, unless exception.
* @ROUTE_PROCESS: Rt block processing flag: Was pkt processed by rt block?
* Also means that rt info is valid, unless exception.
* @TAG_VALID: Flag specifying if TAG and TAG info valid?
* @FRAGMENT: Flag specifying if pkt is IP fragment.
* @FIRST_FRAGMENT: Flag specifying if pkt is first fragment. In this case, frag
* info is invalid
* @V4: Flag specifying pkt is IPv4 or IPv6
* @CKSUM_PROCESS: CSUM block processing flag: Was pkt processed by csum block?
* If so, csum trailer exists
* @AGGR_PROCESS: Aggr block processing flag: Was pkt processed by aggr block?
* @DEST_EOT: Flag specifying if EOT was asserted for the pkt on dest endp
* @DEAGGR_PROCESS: Deaggr block processing flag: Was pkt processed by deaggr
* block?
* @DEAGG_FIRST: Flag specifying if this is the first pkt in deaggr frame
* @SRC_EOT: Flag specifying if EOT asserted by src endp when sending the buffer
* @PREV_EOT: Flag specifying if EOT was sent just before the pkt as part of
* aggr hard-byte-limit
* @BYTE_LIMIT: Flag specifying if pkt is over a configured byte limit.
*/
enum ipahal_pkt_status_mask {
IPAHAL_PKT_STATUS_MASK_FRAG_PROCESS_SHFT = 0,
IPAHAL_PKT_STATUS_MASK_FILT_PROCESS_SHFT,
IPAHAL_PKT_STATUS_MASK_NAT_PROCESS_SHFT,
IPAHAL_PKT_STATUS_MASK_ROUTE_PROCESS_SHFT,
IPAHAL_PKT_STATUS_MASK_TAG_VALID_SHFT,
IPAHAL_PKT_STATUS_MASK_FRAGMENT_SHFT,
IPAHAL_PKT_STATUS_MASK_FIRST_FRAGMENT_SHFT,
IPAHAL_PKT_STATUS_MASK_V4_SHFT,
IPAHAL_PKT_STATUS_MASK_CKSUM_PROCESS_SHFT,
IPAHAL_PKT_STATUS_MASK_AGGR_PROCESS_SHFT,
IPAHAL_PKT_STATUS_MASK_DEST_EOT_SHFT,
IPAHAL_PKT_STATUS_MASK_DEAGGR_PROCESS_SHFT,
IPAHAL_PKT_STATUS_MASK_DEAGG_FIRST_SHFT,
IPAHAL_PKT_STATUS_MASK_SRC_EOT_SHFT,
IPAHAL_PKT_STATUS_MASK_PREV_EOT_SHFT,
IPAHAL_PKT_STATUS_MASK_BYTE_LIMIT_SHFT,
};
/*
 * Returns a boolean value representing a property of the packet.
 * @__flag_shft: The shift value of the flag of the status bitmask of
 * the needed property. See enum ipahal_pkt_status_mask
 * @__status: Pointer to abstracted status structure
*/
#define IPAHAL_PKT_STATUS_MASK_FLAG_VAL(__flag_shft, __status) \
(((__status)->status_mask) & ((u32)0x1<<(__flag_shft)) ? true : false)
/*
* enum ipahal_pkt_status_nat_type - Type of NAT
*/
enum ipahal_pkt_status_nat_type {
IPAHAL_PKT_STATUS_NAT_NONE,
IPAHAL_PKT_STATUS_NAT_SRC,
IPAHAL_PKT_STATUS_NAT_DST,
};
/*
* struct ipahal_pkt_status - IPA status packet abstracted payload.
* This structure describes the status packet fields for the
* following statuses: IPA_STATUS_PACKET, IPA_STATUS_DROPPED_PACKET,
* IPA_STATUS_SUSPENDED_PACKET.
* Other status types have a different status packet structure.
* @status_opcode: The Type of the status (Opcode).
* @exception: The first exception that took place.
* In case of exception, src endp and pkt len are always valid.
* @status_mask: Bit mask of flags describing several properties of the packet
* and the processing it passed through in IPA. See enum ipahal_pkt_status_mask
* @pkt_len: Pkt pyld len including hdr and retained hdr if used. Does
* not include padding or checksum trailer len.
* @endp_src_idx: Source end point index.
* @endp_dest_idx: Destination end point index.
* Not valid in case of exception
* @metadata: meta data value used by packet
* @flt_local: Filter table location flag: Does the matching flt rule belong to
* a flt tbl that resides in lcl memory? (if not, then system mem)
* @flt_hash: Filter hash hit flag: Was the matching flt rule in the hash tbl?
* @flt_global: Global filter rule flag: Does the matching flt rule belong to
* the global flt tbl? (if not, then the per endp tables)
* @flt_ret_hdr: Retain header in filter rule flag: Does the matching flt rule
* specify to retain the header?
* @flt_miss: Filtering miss flag: Was there a filtering rule miss?
* In case of a miss, all flt info is to be ignored
* @flt_rule_id: The ID of the matching filter rule (if no miss).
* This info can be combined with endp_src_idx to locate the exact rule.
* @rt_local: Route table location flag: Does the matching rt rule belong to
* a rt tbl that resides in lcl memory? (if not, then system mem)
* @rt_hash: Route hash hit flag: Was the matching rt rule in the hash tbl?
* @ucp: UC Processing flag
* @rt_tbl_idx: Index of the rt tbl that contains the matching rule
* @rt_miss: Routing miss flag: Was there a routing rule miss?
* @rt_rule_id: The ID of the matching rt rule (if no miss). This info
* can be combined with rt_tbl_idx to locate the exact rule.
* @nat_hit: NAT hit flag: Was there a NAT hit?
* @nat_entry_idx: Index of the NAT entry used for NAT processing
* @nat_type: Defines the type of the NAT operation
* @tag_info: S/W defined value provided via immediate command
* @seq_num: Per source endp unique packet sequence number
* @time_of_day_ctr: running counter from IPA clock
* @hdr_local: Header table location flag: In header insertion, was the header
* taken from the table residing in local memory? (If not, then system mem)
* @hdr_offset: Offset of the used header in the header table
* @frag_hit: Frag hit flag: Was there a frag rule hit in the H/W frag table?
* @frag_rule: Frag rule index in H/W frag table in case of frag hit
*/
struct ipahal_pkt_status {
enum ipahal_pkt_status_opcode status_opcode;
enum ipahal_pkt_status_exception exception;
u32 status_mask;
u32 pkt_len;
u8 endp_src_idx;
u8 endp_dest_idx;
u32 metadata;
bool flt_local;
bool flt_hash;
bool flt_global;
bool flt_ret_hdr;
bool flt_miss;
u16 flt_rule_id;
bool rt_local;
bool rt_hash;
bool ucp;
u8 rt_tbl_idx;
bool rt_miss;
u16 rt_rule_id;
bool nat_hit;
u16 nat_entry_idx;
enum ipahal_pkt_status_nat_type nat_type;
u64 tag_info;
u8 seq_num;
u32 time_of_day_ctr;
bool hdr_local;
u16 hdr_offset;
bool frag_hit;
u8 frag_rule;
};
/*
* ipahal_pkt_status_get_size() - Get H/W size of packet status
*/
u32 ipahal_pkt_status_get_size(void);
/*
* ipahal_pkt_status_parse() - Parse Packet Status payload to abstracted form
* @unparsed_status: Pointer to H/W format of the packet status as read from H/W
* @status: Pointer to pre-allocated buffer where the parsed info will be stored
*/
void ipahal_pkt_status_parse(const void *unparsed_status,
struct ipahal_pkt_status *status);
/*
* ipahal_pkt_status_exception_str() - returns string represents exception type
* @exception: [in] The exception type
*/
const char *ipahal_pkt_status_exception_str(
enum ipahal_pkt_status_exception exception);
/*
* ipahal_cp_hdr_to_hw_buff() - copy header to hardware buffer according to
* base address and offset given.
* @base: dma base address
* @offset: offset from base address where the data will be copied
* @hdr: the header to be copied
* @hdr_len: the length of the header
*/
void ipahal_cp_hdr_to_hw_buff(void *base, u32 offset, u8 *hdr, u32 hdr_len);
/*
* ipahal_cp_proc_ctx_to_hw_buff() - copy processing context to
* base address and offset given.
* @type: type of header processing context
* @base: dma base address
* @offset: offset from base address where the data will be copied
* @hdr_len: the length of the header
* @is_hdr_proc_ctx: header is located in phys_base (true) or hdr_base_addr
* @phys_base: memory location in DDR
* @hdr_base_addr: base address in table
* @offset_entry: offset from hdr_base_addr in table
*/
int ipahal_cp_proc_ctx_to_hw_buff(enum ipa_hdr_proc_type type,
void *base, u32 offset, u32 hdr_len,
bool is_hdr_proc_ctx, dma_addr_t phys_base,
u32 hdr_base_addr,
struct ipa_hdr_offset_entry *offset_entry);
/*
* ipahal_get_proc_ctx_needed_len() - calculates the needed length for addition
* of header processing context according to the type of processing context
* @type: header processing context type (no processing context,
* IPA_HDR_PROC_ETHII_TO_ETHII etc.)
*/
int ipahal_get_proc_ctx_needed_len(enum ipa_hdr_proc_type type);
int ipahal_init(enum ipa_hw_type ipa_hw_type, void __iomem *base,
struct device *ipa_pdev);
void ipahal_destroy(void);
void ipahal_free_dma_mem(struct ipa_mem_buffer *mem);
#endif /* _IPAHAL_H_ */


@@ -0,0 +1,288 @@
/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPAHAL_FLTRT_H_
#define _IPAHAL_FLTRT_H_
/*
* struct ipahal_fltrt_alloc_imgs_params - Params for tbls imgs allocations
* The allocation logic will allocate DMA memory representing the header.
* If the bodies are local (SRAM), the allocation will allocate
* DMA buffers that contain the raw content of these local tables
* @ipt: IP version type
* @tbls_num: Number of tables to represent by the header
* @num_lcl_hash_tbls: Number of local (sram) hashable tables
* @num_lcl_nhash_tbls: Number of local (sram) non-hashable tables
* @total_sz_lcl_hash_tbls: Total size of local hashable tables
* @total_sz_lcl_nhash_tbls: Total size of local non-hashable tables
* @hash_hdr/nhash_hdr: OUT params for the header structures
* @hash_bdy/nhash_bdy: OUT params for the local body structures
*/
struct ipahal_fltrt_alloc_imgs_params {
enum ipa_ip_type ipt;
u32 tbls_num;
u32 num_lcl_hash_tbls;
u32 num_lcl_nhash_tbls;
u32 total_sz_lcl_hash_tbls;
u32 total_sz_lcl_nhash_tbls;
/* OUT PARAMS */
struct ipa_mem_buffer hash_hdr;
struct ipa_mem_buffer nhash_hdr;
struct ipa_mem_buffer hash_bdy;
struct ipa_mem_buffer nhash_bdy;
};
/*
* enum ipahal_rt_rule_hdr_type - Header type used in rt rules
* @IPAHAL_RT_RULE_HDR_NONE: No header is used
* @IPAHAL_RT_RULE_HDR_RAW: Raw header is used
* @IPAHAL_RT_RULE_HDR_PROC_CTX: Header Processing context is used
*/
enum ipahal_rt_rule_hdr_type {
IPAHAL_RT_RULE_HDR_NONE,
IPAHAL_RT_RULE_HDR_RAW,
IPAHAL_RT_RULE_HDR_PROC_CTX,
};
/*
* struct ipahal_rt_rule_gen_params - Params for generating rt rule
* @ipt: IP family version
* @dst_pipe_idx: Destination pipe index
* @hdr_type: Header type to be used
* @hdr_lcl: Is the header in the local or the system table?
* @hdr_ofst: Offset of the header in the header table
* @priority: Rule priority
* @id: Rule ID
* @rule: Rule info
*/
struct ipahal_rt_rule_gen_params {
enum ipa_ip_type ipt;
int dst_pipe_idx;
enum ipahal_rt_rule_hdr_type hdr_type;
bool hdr_lcl;
u32 hdr_ofst;
u32 priority;
u32 id;
const struct ipa_rt_rule *rule;
};
/*
* struct ipahal_rt_rule_entry - Rt rule info parsed from H/W
* @dst_pipe_idx: Destination pipe index
* @hdr_lcl: Is the referenced header located in SRAM or system memory?
* @hdr_ofst: Offset of the header in the header table
* @hdr_type: Header type to be used
* @priority: Rule priority
* @retain_hdr: Whether to retain the removed header during header removal
* @id: Rule ID
* @eq_attrib: Equations and their params in the rule
* @rule_size: Rule size in memory
*/
struct ipahal_rt_rule_entry {
int dst_pipe_idx;
bool hdr_lcl;
u32 hdr_ofst;
enum ipahal_rt_rule_hdr_type hdr_type;
u32 priority;
bool retain_hdr;
u32 id;
struct ipa_ipfltri_rule_eq eq_attrib;
u32 rule_size;
};
/*
* struct ipahal_flt_rule_gen_params - Params for generating flt rule
* @ipt: IP family version
* @rt_tbl_idx: Routing table the rule points to
* @priority: Rule priority
* @id: Rule ID
* @rule: Rule info
*/
struct ipahal_flt_rule_gen_params {
enum ipa_ip_type ipt;
u32 rt_tbl_idx;
u32 priority;
u32 id;
const struct ipa_flt_rule *rule;
};
/*
* struct ipahal_flt_rule_entry - Flt rule info parsed from H/W
* @rule: Rule info
* @priority: Rule priority
* @id: Rule ID
* @rule_size: Rule size in memory
*/
struct ipahal_flt_rule_entry {
struct ipa_flt_rule rule;
u32 priority;
u32 id;
u32 rule_size;
};
/* Get the H/W table (flt/rt) header width */
u32 ipahal_get_hw_tbl_hdr_width(void);
/* Get the H/W local table (SRAM) address alignment
 * Table headers reference local tables via offsets in SRAM.
 * This function returns the alignment of the offset that IPA expects
 */
u32 ipahal_get_lcl_tbl_addr_alignment(void);
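As an illustrative sketch (not the driver's code), a local table offset can be rounded up to such an alignment with the usual mask arithmetic; the hypothetical `align_lcl_tbl_offset` below assumes the alignment is expressed as a power-of-two-minus-one mask, as in the `IPA3_0_HW_TBL_*_ALIGNMENT` defines later in this file (e.g. a mask of 7 means 8-byte alignment):

```c
#include <stdint.h>

/* Round 'offset' up to the boundary implied by 'align_mask'
 * (align_mask = alignment - 1, alignment a power of two). */
uint32_t align_lcl_tbl_offset(uint32_t offset, uint32_t align_mask)
{
	return (offset + align_mask) & ~align_mask;
}
```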
/*
 * Rule priority is used to distinguish the order of rules
 * in the integrated table consisting of hashable and
 * non-hashable tables. Rules at max priority are terminal: once
 * IPA matches such a rule, it uses it and looks no further.
 */
int ipahal_get_rule_max_priority(void);
/* Given a priority, calc and return the next lower one if it is in
* legal range.
*/
int ipahal_rule_decrease_priority(int *prio);
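A hypothetical sketch of the decrease-priority logic (not the driver's implementation), assuming the convention stated above that 0 is the max (highest) priority, so the "next lower" priority is the next larger number; the range bounds mirror the `IPA3_0_RULE_MAX/MIN_PRIORITY` defines later in this file:

```c
#define RULE_MAX_PRIORITY 0
#define RULE_MIN_PRIORITY 1023

/* Move *prio one step down in priority (numerically up).
 * Returns 0 on success, -1 if out of range or already at the minimum. */
int rule_decrease_priority(int *prio)
{
	if (*prio < RULE_MAX_PRIORITY || *prio >= RULE_MIN_PRIORITY)
		return -1;
	(*prio)++;
	return 0;
}
```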
/* Does the given ID represent a rule miss? */
bool ipahal_is_rule_miss_id(u32 id);
/* Get rule ID with high bit only asserted
* Used e.g. to create groups of IDs according to this bit
*/
u32 ipahal_get_rule_id_hi_bit(void);
/* Get the low value possible to be used for rule-id */
u32 ipahal_get_low_rule_id(void);
/*
* ipahal_rt_generate_empty_img() - Generate empty route image
* Creates routing header buffer for the given tables number.
* For each table, make it point to the empty table on DDR.
* @tbls_num: Number of tables. Each will have an entry in the header
* @hash_hdr_size: SRAM buf size of the hash tbls hdr. Used for space check
* @nhash_hdr_size: SRAM buf size of the nhash tbls hdr. Used for space check
* @mem: mem object that points to DMA mem representing the hdr structure
*/
int ipahal_rt_generate_empty_img(u32 tbls_num, u32 hash_hdr_size,
u32 nhash_hdr_size, struct ipa_mem_buffer *mem);
/*
* ipahal_flt_generate_empty_img() - Generate empty filter image
* Creates filter header buffer for the given tables number.
* For each table, make it point to the empty table on DDR.
* @tbls_num: Number of tables. Each will have an entry in the header
* @hash_hdr_size: SRAM buf size of the hash tbls hdr. Used for space check
* @nhash_hdr_size: SRAM buf size of the nhash tbls hdr. Used for space check
* @ep_bitmap: Bitmap representing the EP that has flt tables. The format
* should be: bit0->EP0, bit1->EP1
* @mem: mem object that points to DMA mem representing the hdr structure
*/
int ipahal_flt_generate_empty_img(u32 tbls_num, u32 hash_hdr_size,
u32 nhash_hdr_size, u64 ep_bitmap, struct ipa_mem_buffer *mem);
/*
* ipahal_fltrt_allocate_hw_tbl_imgs() - Allocate tbl images DMA structures
* Usually used during commit.
* Allocates header structures and inits them to point to the empty DDR table
* Allocates body structures for local body tables
* @params: Parameters for IN and OUT regard the allocation.
*/
int ipahal_fltrt_allocate_hw_tbl_imgs(
struct ipahal_fltrt_alloc_imgs_params *params);
/*
* ipahal_fltrt_allocate_hw_sys_tbl() - Allocate DMA mem for H/W flt/rt sys tbl
* @tbl_mem: IN/OUT param. As input, holds the effective table size; as
* output, points to the allocated memory.
*
* The size is adapted for needed alignments/borders.
*/
int ipahal_fltrt_allocate_hw_sys_tbl(struct ipa_mem_buffer *tbl_mem);
/*
* ipahal_fltrt_write_addr_to_hdr() - Fill table header with table address
* Given table addr/offset, adapt it to IPA H/W format and write it
* to given header index.
* @addr: Address or offset to be used
* @hdr_base: base address of header structure to write the address
* @hdr_idx: index of the address in the header structure
* @is_sys: Is it system address or local offset
*/
int ipahal_fltrt_write_addr_to_hdr(u64 addr, void *hdr_base, u32 hdr_idx,
bool is_sys);
/*
* ipahal_fltrt_read_addr_from_hdr() - Given an sram address, read its
* content (physical address or offset) and parse it.
* @hdr_base: base sram address of the header structure.
* @hdr_idx: index of the header entry line in the header structure.
* @addr: The parsed address - Out parameter
* @is_sys: Is this system or local address - Out parameter
*/
int ipahal_fltrt_read_addr_from_hdr(void *hdr_base, u32 hdr_idx, u64 *addr,
bool *is_sys);
/*
* ipahal_rt_generate_hw_rule() - generates the routing hardware rule.
* @params: Params for the rule creation.
* @hw_len: Size of the H/W rule to be returned
* @buf: Buffer to build the rule in. If buf is NULL, then the rule will
* be built in internal temp buf. This is used e.g. to get the rule size
* only.
*/
int ipahal_rt_generate_hw_rule(struct ipahal_rt_rule_gen_params *params,
u32 *hw_len, u8 *buf);
/*
* ipahal_flt_generate_hw_rule() - generates the filtering hardware rule.
* @params: Params for the rule creation.
* @hw_len: Size of the H/W rule to be returned
* @buf: Buffer to build the rule in. If buf is NULL, then the rule will
* be built in internal temp buf. This is used e.g. to get the rule size
* only.
*/
int ipahal_flt_generate_hw_rule(struct ipahal_flt_rule_gen_params *params,
u32 *hw_len, u8 *buf);
/*
* ipahal_flt_generate_equation() - generate flt rule in equation form
* Will build equation form flt rule from given info.
* @ipt: IP family
* @attrib: Rule attribute to be generated
* @eq_atrb: Equation form generated rule
* Note: Usage example: Pass the generated form to other sub-systems
* for inter-subsystems rules exchange.
*/
int ipahal_flt_generate_equation(enum ipa_ip_type ipt,
const struct ipa_rule_attrib *attrib,
struct ipa_ipfltri_rule_eq *eq_atrb);
/*
* ipahal_rt_parse_hw_rule() - Parse H/W formatted rt rule
* Given the rule address, read the rule info from H/W and parse it.
* @rule_addr: Rule address (virtual memory)
* @rule: Out parameter for parsed rule info
*/
int ipahal_rt_parse_hw_rule(u8 *rule_addr,
struct ipahal_rt_rule_entry *rule);
/*
* ipahal_flt_parse_hw_rule() - Parse H/W formatted flt rule
* Given the rule address, read the rule info from H/W and parse it.
* @rule_addr: Rule address (virtual memory)
* @rule: Out parameter for parsed rule info
*/
int ipahal_flt_parse_hw_rule(u8 *rule_addr,
struct ipahal_flt_rule_entry *rule);
#endif /* _IPAHAL_FLTRT_H_ */


@@ -0,0 +1,143 @@
/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPAHAL_FLTRT_I_H_
#define _IPAHAL_FLTRT_I_H_
/*
* enum ipa_fltrt_equations - RULE equations
* These are named values for the equations that can be used.
* The HAL layer holds the mapping between these names and the H/W
* presentation.
*/
enum ipa_fltrt_equations {
IPA_TOS_EQ,
IPA_PROTOCOL_EQ,
IPA_TC_EQ,
IPA_OFFSET_MEQ128_0,
IPA_OFFSET_MEQ128_1,
IPA_OFFSET_MEQ32_0,
IPA_OFFSET_MEQ32_1,
IPA_IHL_OFFSET_MEQ32_0,
IPA_IHL_OFFSET_MEQ32_1,
IPA_METADATA_COMPARE,
IPA_IHL_OFFSET_RANGE16_0,
IPA_IHL_OFFSET_RANGE16_1,
IPA_IHL_OFFSET_EQ_32,
IPA_IHL_OFFSET_EQ_16,
IPA_FL_EQ,
IPA_IS_FRAG,
IPA_EQ_MAX,
};
/* Width and Alignment values for H/W structures.
* Specific for IPA version.
*/
#define IPA3_0_HW_TBL_SYSADDR_ALIGNMENT (127)
#define IPA3_0_HW_TBL_LCLADDR_ALIGNMENT (7)
#define IPA3_0_HW_TBL_BLK_SIZE_ALIGNMENT (127)
#define IPA3_0_HW_TBL_WIDTH (8)
#define IPA3_0_HW_TBL_HDR_WIDTH (8)
#define IPA3_0_HW_TBL_ADDR_MASK (127)
#define IPA3_0_HW_RULE_BUF_SIZE (256)
#define IPA3_0_HW_RULE_START_ALIGNMENT (7)
/*
* Rules Priority.
* Needed due to the classification of rules into hashable and non-hashable.
* Higher priority is lower in number, i.e. 0 is the highest priority
*/
#define IPA3_0_RULE_MAX_PRIORITY (0)
#define IPA3_0_RULE_MIN_PRIORITY (1023)
/*
* RULE ID, bit length (e.g. 10 bits).
*/
#define IPA3_0_RULE_ID_BIT_LEN (10)
#define IPA3_0_LOW_RULE_ID (1)
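As an illustrative sketch under the IPAv3.0 values above (not the driver's code): with a 10-bit rule ID, the all-ones value 0x3FF is what the status-packet documentation later in this file reserves to flag a rule miss, and the high bit is 0x200. Hypothetical helpers deriving these from the bit length:

```c
#include <stdint.h>
#include <stdbool.h>

#define RULE_ID_BIT_LEN 10u

/* All-ones ID reserved for rule miss */
uint32_t rule_miss_id(void)
{
	return (1u << RULE_ID_BIT_LEN) - 1;	/* 0x3FF */
}

/* Highest bit of the rule ID field */
uint32_t rule_id_hi_bit(void)
{
	return 1u << (RULE_ID_BIT_LEN - 1);	/* 0x200 */
}

bool is_rule_miss_id(uint32_t id)
{
	return id == rule_miss_id();
}
```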
/**
* struct ipa3_0_rt_rule_hw_hdr - HW header of IPA routing rule
* @word: routing rule header properties
* @en_rule: enable rule - Equation bit fields
* @pipe_dest_idx: destination pipe index
* @system: Is the referenced header in lcl or sys memory
* @hdr_offset: header offset
* @proc_ctx: whether hdr_offset points to header table or to
* header processing context table
* @priority: Rule priority. Added to distinguish the order of rules
* in the integrated table consisting of hashable and
* non-hashable parts
* @rsvd1: reserved bits
* @retain_hdr: when set, the header removed as part of header removal
* is added back to the packet. This is done as part of the
* header insertion block.
* @rule_id: rule ID that will be returned in the packet status
* @rsvd2: reserved bits
*/
struct ipa3_0_rt_rule_hw_hdr {
union {
u64 word;
struct {
u64 en_rule:16;
u64 pipe_dest_idx:5;
u64 system:1;
u64 hdr_offset:9;
u64 proc_ctx:1;
u64 priority:10;
u64 rsvd1:5;
u64 retain_hdr:1;
u64 rule_id:10;
u64 rsvd2:6;
} hdr;
} u;
};
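For illustration only, the 64-bit header word can also be built by hand, shifting each field in LSB-first order with the widths declared above (16+5+1+9+1+10+5+1+10+6 = 64). This `pack_rt_rule_hdr` is a hypothetical sketch, not the driver's method: the driver relies on compiler bitfield layout, and the LSB-first ordering here is an assumption about that layout.

```c
#include <stdint.h>

/* Pack the routing rule header word field by field, LSB first. */
uint64_t pack_rt_rule_hdr(uint16_t en_rule, uint8_t pipe_dest_idx,
			  uint8_t system, uint16_t hdr_offset,
			  uint8_t proc_ctx, uint16_t priority,
			  uint8_t retain_hdr, uint16_t rule_id)
{
	uint64_t w = 0;
	unsigned int shft = 0;

	w |= (uint64_t)(en_rule & 0xFFFF) << shft;	shft += 16;
	w |= (uint64_t)(pipe_dest_idx & 0x1F) << shft;	shft += 5;
	w |= (uint64_t)(system & 0x1) << shft;		shft += 1;
	w |= (uint64_t)(hdr_offset & 0x1FF) << shft;	shft += 9;
	w |= (uint64_t)(proc_ctx & 0x1) << shft;	shft += 1;
	w |= (uint64_t)(priority & 0x3FF) << shft;	shft += 10;
	shft += 5;					/* rsvd1 */
	w |= (uint64_t)(retain_hdr & 0x1) << shft;	shft += 1;
	w |= (uint64_t)(rule_id & 0x3FF) << shft;	shft += 10;
	/* remaining 6 bits (rsvd2) bring shft + 6 to 64 */
	return w;
}
```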
/**
* struct ipa3_0_flt_rule_hw_hdr - HW header of IPA filter rule
* @word: filtering rule properties
* @en_rule: enable rule
* @action: post filtering action
* @rt_tbl_idx: index in routing table
* @retain_hdr: when set, the header removed as part of header removal
* is added back to the packet. This is done as part of the
* header insertion block.
* @rsvd1: reserved bits
* @priority: Rule priority. Added to distinguish the order of rules
* in the integrated table consisting of hashable and
* non-hashable parts
* @rsvd2: reserved bits
* @rule_id: rule ID that will be returned in the packet status
* @rsvd3: reserved bits
*/
struct ipa3_0_flt_rule_hw_hdr {
union {
u64 word;
struct {
u64 en_rule:16;
u64 action:5;
u64 rt_tbl_idx:5;
u64 retain_hdr:1;
u64 rsvd1:5;
u64 priority:10;
u64 rsvd2:6;
u64 rule_id:10;
u64 rsvd3:6;
} hdr;
} u;
};
int ipahal_fltrt_init(enum ipa_hw_type ipa_hw_type);
void ipahal_fltrt_destroy(void);
#endif /* _IPAHAL_FLTRT_I_H_ */


@@ -0,0 +1,549 @@
/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPAHAL_I_H_
#define _IPAHAL_I_H_
#include <linux/ipa.h>
#include "../../ipa_common_i.h"
#define IPAHAL_DRV_NAME "ipahal"
#define IPAHAL_DBG(fmt, args...) \
do { \
pr_debug(IPAHAL_DRV_NAME " %s:%d " fmt, __func__, __LINE__, \
## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPAHAL_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPAHAL_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPAHAL_DBG_LOW(fmt, args...) \
do { \
pr_debug(IPAHAL_DRV_NAME " %s:%d " fmt, __func__, __LINE__, \
## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPAHAL_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPAHAL_ERR(fmt, args...) \
do { \
pr_err(IPAHAL_DRV_NAME " %s:%d " fmt, __func__, __LINE__, \
## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPAHAL_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPAHAL_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
/*
* struct ipahal_context - HAL global context data
* @hw_type: IPA H/W type/version.
* @base: Base address to be used for accessing IPA memory. This is
* I/O memory mapped address.
* @dent: Debugfs folder dir entry
* @ipa_pdev: IPA Platform Device. Will be used for DMA memory
* @empty_fltrt_tbl: Empty table to be used at tables init.
*/
struct ipahal_context {
enum ipa_hw_type hw_type;
void __iomem *base;
struct dentry *dent;
struct device *ipa_pdev;
struct ipa_mem_buffer empty_fltrt_tbl;
};
extern struct ipahal_context *ipahal_ctx;
/* Immediate commands H/W structures */
/*
* struct ipa_imm_cmd_hw_ip_v4_filter_init - IP_V4_FILTER_INIT command payload
* in H/W format.
* Inits IPv4 filter block.
* @hash_rules_addr: Addr in system mem where ipv4 hashable flt rules start
* @hash_rules_size: Size in bytes of the hashable tbl to cpy to local mem
* @hash_local_addr: Addr in shared mem where ipv4 hashable flt tbl should
* be copied to
* @nhash_rules_size: Size in bytes of the non-hashable tbl to cpy to local mem
* @nhash_local_addr: Addr in shared mem where ipv4 non-hashable flt tbl should
* be copied to
* @rsvd: reserved
* @nhash_rules_addr: Addr in sys mem where ipv4 non-hashable flt tbl starts
*/
struct ipa_imm_cmd_hw_ip_v4_filter_init {
u64 hash_rules_addr:64;
u64 hash_rules_size:12;
u64 hash_local_addr:16;
u64 nhash_rules_size:12;
u64 nhash_local_addr:16;
u64 rsvd:8;
u64 nhash_rules_addr:64;
};
/*
* struct ipa_imm_cmd_hw_ip_v6_filter_init - IP_V6_FILTER_INIT command payload
* in H/W format.
* Inits IPv6 filter block.
* @hash_rules_addr: Addr in system mem where ipv6 hashable flt rules start
* @hash_rules_size: Size in bytes of the hashable tbl to cpy to local mem
* @hash_local_addr: Addr in shared mem where ipv6 hashable flt tbl should
* be copied to
* @nhash_rules_size: Size in bytes of the non-hashable tbl to cpy to local mem
* @nhash_local_addr: Addr in shared mem where ipv6 non-hashable flt tbl should
* be copied to
* @rsvd: reserved
* @nhash_rules_addr: Addr in sys mem where ipv6 non-hashable flt tbl starts
*/
struct ipa_imm_cmd_hw_ip_v6_filter_init {
u64 hash_rules_addr:64;
u64 hash_rules_size:12;
u64 hash_local_addr:16;
u64 nhash_rules_size:12;
u64 nhash_local_addr:16;
u64 rsvd:8;
u64 nhash_rules_addr:64;
};
/*
* struct ipa_imm_cmd_hw_ip_v4_nat_init - IP_V4_NAT_INIT command payload
* in H/W format.
* Inits the IPv4 NAT block. Initiates the NAT table with its dimensions,
* location, cache address and other related parameters.
* @ipv4_rules_addr: Addr in sys/shared mem where ipv4 NAT rules start
* @ipv4_expansion_rules_addr: Addr in sys/shared mem where the expansion NAT
* table starts. IPv4 NAT rules that result in NAT collision are located
* in this table.
* @index_table_addr: Addr in sys/shared mem where index table, which points
* to NAT table starts
* @index_table_expansion_addr: Addr in sys/shared mem where expansion index
* table starts
* @table_index: For future support of multiple NAT tables
* @rsvd1: reserved
* @ipv4_rules_addr_type: ipv4_rules_addr in sys or shared mem
* @ipv4_expansion_rules_addr_type: ipv4_expansion_rules_addr in
* sys or shared mem
* @index_table_addr_type: index_table_addr in sys or shared mem
* @index_table_expansion_addr_type: index_table_expansion_addr in
* sys or shared mem
* @size_base_tables: Num of entries in NAT tbl and idx tbl (each)
* @size_expansion_tables: Num of entries in NAT expansion tbl and expansion
* idx tbl (each)
* @rsvd2: reserved
* @public_ip_addr: public IP address
*/
struct ipa_imm_cmd_hw_ip_v4_nat_init {
u64 ipv4_rules_addr:64;
u64 ipv4_expansion_rules_addr:64;
u64 index_table_addr:64;
u64 index_table_expansion_addr:64;
u64 table_index:3;
u64 rsvd1:1;
u64 ipv4_rules_addr_type:1;
u64 ipv4_expansion_rules_addr_type:1;
u64 index_table_addr_type:1;
u64 index_table_expansion_addr_type:1;
u64 size_base_tables:12;
u64 size_expansion_tables:10;
u64 rsvd2:2;
u64 public_ip_addr:32;
};
/*
* struct ipa_imm_cmd_hw_ip_v4_routing_init - IP_V4_ROUTING_INIT command payload
* in H/W format.
* Inits IPv4 routing table/structure - with the rules and other related params
* @hash_rules_addr: Addr in system mem where ipv4 hashable rt rules start
* @hash_rules_size: Size in bytes of the hashable tbl to cpy to local mem
* @hash_local_addr: Addr in shared mem where ipv4 hashable rt tbl should
* be copied to
* @nhash_rules_size: Size in bytes of the non-hashable tbl to cpy to local mem
* @nhash_local_addr: Addr in shared mem where ipv4 non-hashable rt tbl should
* be copied to
* @rsvd: reserved
* @nhash_rules_addr: Addr in sys mem where ipv4 non-hashable rt tbl starts
*/
struct ipa_imm_cmd_hw_ip_v4_routing_init {
u64 hash_rules_addr:64;
u64 hash_rules_size:12;
u64 hash_local_addr:16;
u64 nhash_rules_size:12;
u64 nhash_local_addr:16;
u64 rsvd:8;
u64 nhash_rules_addr:64;
};
/*
* struct ipa_imm_cmd_hw_ip_v6_routing_init - IP_V6_ROUTING_INIT command payload
* in H/W format.
* Inits IPv6 routing table/structure - with the rules and other related params
* @hash_rules_addr: Addr in system mem where ipv6 hashable rt rules start
* @hash_rules_size: Size in bytes of the hashable tbl to cpy to local mem
* @hash_local_addr: Addr in shared mem where ipv6 hashable rt tbl should
* be copied to
* @nhash_rules_size: Size in bytes of the non-hashable tbl to cpy to local mem
* @nhash_local_addr: Addr in shared mem where ipv6 non-hashable rt tbl should
* be copied to
* @rsvd: reserved
* @nhash_rules_addr: Addr in sys mem where ipv6 non-hashable rt tbl starts
*/
struct ipa_imm_cmd_hw_ip_v6_routing_init {
u64 hash_rules_addr:64;
u64 hash_rules_size:12;
u64 hash_local_addr:16;
u64 nhash_rules_size:12;
u64 nhash_local_addr:16;
u64 rsvd:8;
u64 nhash_rules_addr:64;
};
/*
* struct ipa_imm_cmd_hw_hdr_init_local - HDR_INIT_LOCAL command payload
* in H/W format.
* Inits hdr table within local mem with the hdrs and their length.
* @hdr_table_addr: Word address in sys mem where the table starts (SRC)
* @size_hdr_table: Size of the above (in bytes)
* @hdr_addr: header address in IPA sram (used as DST for memory copy)
* @rsvd: reserved
*/
struct ipa_imm_cmd_hw_hdr_init_local {
u64 hdr_table_addr:64;
u64 size_hdr_table:12;
u64 hdr_addr:16;
u64 rsvd:4;
};
/*
* struct ipa_imm_cmd_hw_nat_dma - NAT_DMA command payload
* in H/W format
* Perform DMA operation on NAT related mem addresses. Copy data into
* different locations within NAT associated tbls (for add/remove NAT rules).
* @table_index: NAT tbl index. Defines the NAT tbl on which to perform DMA op.
* @rsvd1: reserved
* @base_addr: Base addr to which the DMA operation should be performed.
* @rsvd2: reserved
* @offset: offset in bytes from base addr to write 'data' to
* @data: data to be written
* @rsvd3: reserved
*/
struct ipa_imm_cmd_hw_nat_dma {
u64 table_index:3;
u64 rsvd1:1;
u64 base_addr:2;
u64 rsvd2:2;
u64 offset:32;
u64 data:16;
u64 rsvd3:8;
};
/*
* struct ipa_imm_cmd_hw_hdr_init_system - HDR_INIT_SYSTEM command payload
* in H/W format.
* Inits hdr table within sys mem with the hdrs and their length.
* @hdr_table_addr: Word address in system memory where the hdrs tbl starts.
*/
struct ipa_imm_cmd_hw_hdr_init_system {
u64 hdr_table_addr:64;
};
/*
* struct ipa_imm_cmd_hw_ip_packet_init - IP_PACKET_INIT command payload
* in H/W format.
* Configuration for a specific IP pkt. Shall be sent prior to the IP pkt
* data. The pkt will not go through IP pkt processing.
* @destination_pipe_index: Destination pipe index (in case routing
* is enabled, this field will overwrite the rt rule)
* @rsvd: reserved
*/
struct ipa_imm_cmd_hw_ip_packet_init {
u64 destination_pipe_index:5;
u64 rsv1:59;
};
/*
* struct ipa_imm_cmd_hw_register_write - REGISTER_WRITE command payload
* in H/W format.
* Write value to register. Allows reg changes to be synced with data packet
* and other immediate command. Can be used to access the sram
* @sw_rsvd: Ignored by H/W. May be used by S/W
* @skip_pipeline_clear: 0 to wait until IPA pipeline is clear. 1 don't wait
* @offset: offset from IPA base address - Lower 16bit of the IPA reg addr
* @value: value to write to register
* @value_mask: mask specifying which value bits to write to the register
* @pipeline_clear_options: options for pipeline to clear
* 0: HPS - no pkt inside HPS (not grp specific)
* 1: source group - The immediate cmd src grp does not use any pkt ctxs
* 2: Wait until no pkt reside inside IPA pipeline
* 3: reserved
* @rsvd: reserved - should be set to zero
*/
struct ipa_imm_cmd_hw_register_write {
u64 sw_rsvd:15;
u64 skip_pipeline_clear:1;
u64 offset:16;
u64 value:32;
u64 value_mask:32;
u64 pipeline_clear_options:2;
u64 rsvd:30;
};
/*
* struct ipa_imm_cmd_hw_dma_shared_mem - DMA_SHARED_MEM command payload
* in H/W format.
* Perform mem copy into or out of the SW area of IPA local mem
* @sw_rsvd: Ignored by H/W. May be used by S/W
* @size: Size in bytes of data to copy. Expected size is up to 2K bytes
* @local_addr: Address in IPA local memory
* @direction: Read or write?
* 0: IPA write, Write to local address from system address
* 1: IPA read, Read from local address to system address
* @skip_pipeline_clear: 0 to wait until IPA pipeline is clear. 1 don't wait
* @pipeline_clear_options: options for pipeline to clear
* 0: HPS - no pkt inside HPS (not grp specific)
* 1: source group - The immediate cmd src grp does not use any pkt ctxs
* 2: Wait until no pkt reside inside IPA pipeline
* 3: reserved
* @rsvd: reserved - should be set to zero
* @system_addr: Address in system memory
*/
struct ipa_imm_cmd_hw_dma_shared_mem {
u64 sw_rsvd:16;
u64 size:16;
u64 local_addr:16;
u64 direction:1;
u64 skip_pipeline_clear:1;
u64 pipeline_clear_options:2;
u64 rsvd:12;
u64 system_addr:64;
};
/*
* struct ipa_imm_cmd_hw_ip_packet_tag_status -
* IP_PACKET_TAG_STATUS command payload in H/W format.
* This cmd is used to allow SW to track HW processing by setting a TAG
* value that is passed back to SW inside Packet Status information.
* TAG info will be provided as part of Packet Status info generated for
* the next pkt transferred over the pipe.
* This immediate command must be followed by a packet in the same transfer.
* @sw_rsvd: Ignored by H/W. May be used by S/W
* @tag: Tag that is provided back to SW
*/
struct ipa_imm_cmd_hw_ip_packet_tag_status {
u64 sw_rsvd:16;
u64 tag:48;
};
/*
* struct ipa_imm_cmd_hw_dma_task_32b_addr -
* IPA_DMA_TASK_32B_ADDR command payload in H/W format.
* Used by clients using 32bit addresses. Used to perform DMA operation on
* multiple descriptors.
* The Opcode is dynamic, where it holds the number of buffer to process
* @sw_rsvd: Ignored by H/W. May be used by S/W
* @cmplt: Complete flag: When asserted IPA will interrupt SW when the entire
* DMA related data was completely xfered to its destination.
* @eof: End Of Frame flag: When asserted IPA will assert the EOT to the
* dest client. This is used for the aggr sequence
* @flsh: Flush flag: When asserted, pkt will go through the IPA blocks but
* will not be xfered to dest client but rather will be discarded
* @lock: Lock pipe flag: When asserted, IPA will stop processing descriptors
* from other EPs in the same src grp (RX queue)
* @unlock: Unlock pipe flag: When asserted, IPA will stop exclusively
* servicing current EP out of the src EPs of the grp (RX queue)
* @size1: Size of buffer1 data
* @addr1: Pointer to buffer1 data
* @packet_size: Total packet size. If a pkt is sent using multiple DMA_TASKs,
* only the first one needs to have this field set. It will be ignored
* in subsequent DMA_TASKs until the packet ends (EOT). First DMA_TASK
* must contain this field (2 or more buffers) or EOT.
*/
struct ipa_imm_cmd_hw_dma_task_32b_addr {
u64 sw_rsvd:11;
u64 cmplt:1;
u64 eof:1;
u64 flsh:1;
u64 lock:1;
u64 unlock:1;
u64 size1:16;
u64 addr1:32;
u64 packet_size:16;
};
/* IPA Status packet H/W structures and info */
/*
* struct ipa_pkt_status_hw - IPA status packet payload in H/W format.
* This structure describes the status packet H/W structure for the
* following statuses: IPA_STATUS_PACKET, IPA_STATUS_DROPPED_PACKET,
* IPA_STATUS_SUSPENDED_PACKET.
* Other status types have different status packet structures.
* @status_opcode: The Type of the status (Opcode).
* @exception: (not bitmask) - the first exception that took place.
* In case of exception, src endp and pkt len are always valid.
* @status_mask: Bit mask specifying on which H/W blocks the pkt was processed.
* @pkt_len: Pkt pyld len including hdr; includes the retained hdr if used. Does
* not include padding or checksum trailer len.
* @endp_src_idx: Source end point index.
* @rsvd1: reserved
* @endp_dest_idx: Destination end point index.
* Not valid in case of exception
* @rsvd2: reserved
* @metadata: meta data value used by packet
* @flt_local: Filter table location flag: Does the matching flt rule belong to
* a flt tbl that resides in lcl memory? (if not, then system mem)
* @flt_hash: Filter hash hit flag: Was the matching flt rule in the hash tbl?
* @flt_global: Global filter rule flag: Does the matching flt rule belong to
* the global flt tbl? (if not, then the per endp tables)
* @flt_ret_hdr: Retain header in filter rule flag: Does the matching flt rule
* specify to retain the header?
* @flt_rule_id: The ID of the matching filter rule. This info can be combined
* with endp_src_idx to locate the exact rule. ID=0x3FF is reserved to specify
* a flt miss. In case of a miss, all flt info is to be ignored
* @rt_local: Route table location flag: Does the matching rt rule belong to
* a rt tbl that resides in lcl memory? (if not, then system mem)
* @rt_hash: Route hash hit flag: Was the matching rt rule in the hash tbl?
* @ucp: UC Processing flag.
* @rt_tbl_idx: Index of the rt tbl that contains the matching rule
* @rt_rule_id: The ID of the matching rt rule. This info can be combined
* with rt_tbl_idx to locate the exact rule. ID=0x3FF is reserved to specify
* a rt miss. In case of a miss, all rt info is to be ignored
* @nat_hit: NAT hit flag: Was there a NAT hit?
* @nat_entry_idx: Index of the NAT entry used for NAT processing
* @nat_type: Defines the type of the NAT operation:
* 00: No NAT
* 01: Source NAT
* 10: Destination NAT
* 11: Reserved
* @tag_info: S/W defined value provided via immediate command
* @seq_num: Per source endp unique packet sequence number
* @time_of_day_ctr: running counter from IPA clock
* @hdr_local: Header table location flag: In header insertion, was the header
* taken from the table that resides in local memory? (If not, then system mem)
* @hdr_offset: Offset of the used header in the header table
* @frag_hit: Frag hit flag: Was there a frag rule hit in the H/W frag table?
* @frag_rule: Frag rule index in H/W frag table in case of frag hit
* @hw_specific: H/W specific reserved value
*/
struct ipa_pkt_status_hw {
u64 status_opcode:8;
u64 exception:8;
u64 status_mask:16;
u64 pkt_len:16;
u64 endp_src_idx:5;
u64 rsvd1:3;
u64 endp_dest_idx:5;
u64 rsvd2:3;
u64 metadata:32;
u64 flt_local:1;
u64 flt_hash:1;
u64 flt_global:1;
u64 flt_ret_hdr:1;
u64 flt_rule_id:10;
u64 rt_local:1;
u64 rt_hash:1;
u64 ucp:1;
u64 rt_tbl_idx:5;
u64 rt_rule_id:10;
u64 nat_hit:1;
u64 nat_entry_idx:13;
u64 nat_type:2;
u64 tag_info:48;
u64 seq_num:8;
u64 time_of_day_ctr:24;
u64 hdr_local:1;
u64 hdr_offset:10;
u64 frag_hit:1;
u64 frag_rule:4;
u64 hw_specific:16;
};
/* Size of H/W Packet Status */
#define IPA3_0_PKT_STATUS_SIZE 32
/* Headers and processing context H/W structures and definitions */
/* uCP command numbers */
#define IPA_HDR_UCP_802_3_TO_802_3 6
#define IPA_HDR_UCP_802_3_TO_ETHII 7
#define IPA_HDR_UCP_ETHII_TO_802_3 8
#define IPA_HDR_UCP_ETHII_TO_ETHII 9
/* Processing context TLV type */
#define IPA_PROC_CTX_TLV_TYPE_END 0
#define IPA_PROC_CTX_TLV_TYPE_HDR_ADD 1
#define IPA_PROC_CTX_TLV_TYPE_PROC_CMD 3
/**
* struct ipa_hw_hdr_proc_ctx_tlv -
* HW structure of IPA processing context header - TLV part
* @type: 0 - end type
* 1 - header addition type
* 3 - processing command type
* @length: number of bytes after tlv
* for type:
* 0 - needs to be 0
* 1 - header addition length
* 3 - number of 32B including type and length.
* @value: specific value for type
* for type:
* 0 - needs to be 0
* 1 - header length
* 3 - command ID (see IPA_HDR_UCP_* definitions)
*/
struct ipa_hw_hdr_proc_ctx_tlv {
u32 type:8;
u32 length:8;
u32 value:16;
};
/**
* struct ipa_hw_hdr_proc_ctx_hdr_add -
* HW structure of IPA processing context - add header tlv
* @tlv: IPA processing context TLV
* @hdr_addr: processing context header address
*/
struct ipa_hw_hdr_proc_ctx_hdr_add {
struct ipa_hw_hdr_proc_ctx_tlv tlv;
u32 hdr_addr;
};
/**
* struct ipa_hw_hdr_proc_ctx_add_hdr_seq -
* IPA processing context header - add header sequence
* @hdr_add: add header command
* @end: tlv end command (cmd.type must be 0)
*/
struct ipa_hw_hdr_proc_ctx_add_hdr_seq {
struct ipa_hw_hdr_proc_ctx_hdr_add hdr_add;
struct ipa_hw_hdr_proc_ctx_tlv end;
};
/**
* struct ipa_hw_hdr_proc_ctx_add_hdr_cmd_seq -
* IPA processing context header - process command sequence
* @hdr_add: add header command
* @cmd: tlv processing command (cmd.type must be 3)
* @end: tlv end command (cmd.type must be 0)
*/
struct ipa_hw_hdr_proc_ctx_add_hdr_cmd_seq {
struct ipa_hw_hdr_proc_ctx_hdr_add hdr_add;
struct ipa_hw_hdr_proc_ctx_tlv cmd;
struct ipa_hw_hdr_proc_ctx_tlv end;
};
#endif /* _IPAHAL_I_H_ */

File diff suppressed because it is too large


@@ -0,0 +1,449 @@
/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPAHAL_REG_H_
#define _IPAHAL_REG_H_
#include <linux/ipa.h>
/*
* Registers names
*
* NOTE: Any change to this enum requires a matching change to the
* ipareg_name_to_str array as well.
*/
enum ipahal_reg_name {
IPA_ROUTE,
IPA_IRQ_STTS_EE_n,
IPA_IRQ_EN_EE_n,
IPA_IRQ_CLR_EE_n,
IPA_IRQ_SUSPEND_INFO_EE_n,
IPA_SUSPEND_IRQ_EN_EE_n,
IPA_SUSPEND_IRQ_CLR_EE_n,
IPA_BCR,
IPA_ENABLED_PIPES,
IPA_COMP_SW_RESET,
IPA_VERSION,
IPA_TAG_TIMER,
IPA_COMP_HW_VERSION,
IPA_SPARE_REG_1,
IPA_SPARE_REG_2,
IPA_COMP_CFG,
IPA_STATE_AGGR_ACTIVE,
IPA_ENDP_INIT_HDR_n,
IPA_ENDP_INIT_HDR_EXT_n,
IPA_ENDP_INIT_AGGR_n,
IPA_AGGR_FORCE_CLOSE,
IPA_ENDP_INIT_ROUTE_n,
IPA_ENDP_INIT_MODE_n,
IPA_ENDP_INIT_NAT_n,
IPA_ENDP_INIT_CTRL_n,
IPA_ENDP_INIT_HOL_BLOCK_EN_n,
IPA_ENDP_INIT_HOL_BLOCK_TIMER_n,
IPA_ENDP_INIT_DEAGGR_n,
IPA_ENDP_INIT_SEQ_n,
IPA_DEBUG_CNT_REG_n,
IPA_ENDP_INIT_CFG_n,
IPA_IRQ_EE_UC_n,
IPA_ENDP_INIT_HDR_METADATA_MASK_n,
IPA_ENDP_INIT_HDR_METADATA_n,
IPA_ENDP_INIT_RSRC_GRP_n,
IPA_SHARED_MEM_SIZE,
IPA_SRAM_DIRECT_ACCESS_n,
IPA_DEBUG_CNT_CTRL_n,
IPA_UC_MAILBOX_m_n,
IPA_FILT_ROUT_HASH_FLUSH,
IPA_SINGLE_NDP_MODE,
IPA_QCNCM,
IPA_SYS_PKT_PROC_CNTXT_BASE,
IPA_LOCAL_PKT_PROC_CNTXT_BASE,
IPA_ENDP_STATUS_n,
IPA_ENDP_FILTER_ROUTER_HSH_CFG_n,
IPA_SRC_RSRC_GRP_01_RSRC_TYPE_n,
IPA_SRC_RSRC_GRP_23_RSRC_TYPE_n,
IPA_SRC_RSRC_GRP_45_RSRC_TYPE_n,
IPA_SRC_RSRC_GRP_67_RSRC_TYPE_n,
IPA_DST_RSRC_GRP_01_RSRC_TYPE_n,
IPA_DST_RSRC_GRP_23_RSRC_TYPE_n,
IPA_DST_RSRC_GRP_45_RSRC_TYPE_n,
IPA_DST_RSRC_GRP_67_RSRC_TYPE_n,
IPA_RX_HPS_CLIENTS_MIN_DEPTH_0,
IPA_RX_HPS_CLIENTS_MIN_DEPTH_1,
IPA_RX_HPS_CLIENTS_MAX_DEPTH_0,
IPA_RX_HPS_CLIENTS_MAX_DEPTH_1,
IPA_QSB_MAX_WRITES,
IPA_QSB_MAX_READS,
IPA_TX_CFG,
IPA_REG_MAX,
};
/*
* struct ipahal_reg_route - IPA route register
* @route_dis: route disable
* @route_def_pipe: route default pipe
* @route_def_hdr_table: route default header table
* @route_def_hdr_ofst: route default header offset table
* @route_frag_def_pipe: Default pipe to route fragmented exception
* packets and frag new rule statuses, if the source pipe does not have
* a notification status pipe defined.
* @route_def_retain_hdr: default value of retain header. It is used
* when no rule was hit
*/
struct ipahal_reg_route {
u32 route_dis;
u32 route_def_pipe;
u32 route_def_hdr_table;
u32 route_def_hdr_ofst;
u8 route_frag_def_pipe;
u32 route_def_retain_hdr;
};
/*
* struct ipahal_reg_endp_init_route - IPA ENDP_INIT_ROUTE_n register
* @route_table_index: Default index of routing table (IPA Consumer).
*/
struct ipahal_reg_endp_init_route {
u32 route_table_index;
};
/*
* struct ipahal_reg_endp_init_rsrc_grp - IPA_ENDP_INIT_RSRC_GRP_n register
* @rsrc_grp: Index of the group for this ENDP. If this ENDP is a source ENDP,
* the index is for a source-resource-group. If a destination ENDP, the
* index is for a destination-resource-group.
*/
struct ipahal_reg_endp_init_rsrc_grp {
u32 rsrc_grp;
};
/*
* struct ipahal_reg_endp_init_mode - IPA ENDP_INIT_MODE_n register
* @dst_pipe_number: This parameter specifies the destination pipe that
* packets will be routed to. Valid for DMA mode only and for Input
* Pipes only (IPA Consumer)
*/
struct ipahal_reg_endp_init_mode {
u32 dst_pipe_number;
struct ipa_ep_cfg_mode ep_mode;
};
/*
* struct ipahal_reg_shared_mem_size - IPA SHARED_MEM_SIZE register
* @shared_mem_sz: Available size [in 8Bytes] of SW partition within
* IPA shared memory.
* @shared_mem_baddr: Offset of SW partition within IPA
* shared memory[in 8Bytes]. To get absolute address of SW partition,
* add this offset to IPA_SRAM_DIRECT_ACCESS_n baddr.
*/
struct ipahal_reg_shared_mem_size {
u32 shared_mem_sz;
u32 shared_mem_baddr;
};
/*
* struct ipahal_reg_ep_cfg_status - status configuration in IPA end-point
* @status_en: Determines if end point supports Status Indications. SW should
* set this bit in order to enable Statuses. Output Pipe - send
* Status indications only if bit is set. Input Pipe - forward Status
* indication to STATUS_ENDP only if bit is set. Valid for Input
* and Output Pipes (IPA Consumer and Producer)
* @status_ep: Statuses generated for this endpoint will be forwarded to the
* specified Status End Point. The status endpoint needs to be
* configured with STATUS_EN=1. Valid only for Input Pipes (IPA
* Consumer)
* @status_location: Location of PKT-STATUS on destination pipe.
* If set to 0 (default), PKT-STATUS will be appended before the packet
* for this endpoint. If set to 1, PKT-STATUS will be appended after the
* packet for this endpoint. Valid only for Output Pipes (IPA Producer)
*/
struct ipahal_reg_ep_cfg_status {
bool status_en;
u8 status_ep;
bool status_location;
};
/*
* struct ipahal_reg_hash_tuple - Hash tuple members for flt and rt
* each field tells whether the member is to be masked or not
* @src_id: pipe number for flt, table index for rt
* @src_ip_addr: IP source address
* @dst_ip_addr: IP destination address
* @src_port: L4 source port
* @dst_port: L4 destination port
* @protocol: IP protocol field
* @meta_data: packet meta-data
*
*/
struct ipahal_reg_hash_tuple {
/* src_id: pipe in flt, tbl index in rt */
bool src_id;
bool src_ip_addr;
bool dst_ip_addr;
bool src_port;
bool dst_port;
bool protocol;
bool meta_data;
};
/*
* struct ipahal_reg_fltrt_hash_tuple - IPA hash tuple register
* @flt: Hash tuple info for filtering
* @rt: Hash tuple info for routing
* @undefinedX: Undefined/unused bit fields of the register
*/
struct ipahal_reg_fltrt_hash_tuple {
struct ipahal_reg_hash_tuple flt;
struct ipahal_reg_hash_tuple rt;
u32 undefined1;
u32 undefined2;
};
/*
* enum ipahal_reg_dbg_cnt_type - Debug Counter Type
* DBG_CNT_TYPE_IPV4_FLTR - Count IPv4 filtering rules
* DBG_CNT_TYPE_IPV4_ROUT - Count IPv4 routing rules
* DBG_CNT_TYPE_GENERAL - General counter
* DBG_CNT_TYPE_IPV6_FLTR - Count IPv6 filtering rules
* DBG_CNT_TYPE_IPV6_ROUT - Count IPv6 routing rules
*/
enum ipahal_reg_dbg_cnt_type {
DBG_CNT_TYPE_IPV4_FLTR,
DBG_CNT_TYPE_IPV4_ROUT,
DBG_CNT_TYPE_GENERAL,
DBG_CNT_TYPE_IPV6_FLTR,
DBG_CNT_TYPE_IPV6_ROUT,
};
/*
* struct ipahal_reg_debug_cnt_ctrl - IPA_DEBUG_CNT_CTRL_n register
* @en - Enable debug counter
* @type - Type of debug counting
* @product - False -> count bytes. True -> count packets
* @src_pipe - Specific Pipe to match. If FF, no need to match
* specific pipe
* @rule_idx_pipe_rule - Global Rule or Pipe Rule. If pipe, then indicated by
* src_pipe. Starting at IPA v3.5, Global Rule is not supported and
* this field will be ignored.
* @rule_idx - Rule index. Irrelevant for type General
*/
struct ipahal_reg_debug_cnt_ctrl {
bool en;
enum ipahal_reg_dbg_cnt_type type;
bool product;
u8 src_pipe;
bool rule_idx_pipe_rule;
u16 rule_idx;
};
/*
* struct ipahal_reg_rsrc_grp_cfg - Min/Max values for two rsrc groups
* @x_min - first group min value
* @x_max - first group max value
* @y_min - second group min value
* @y_max - second group max value
*/
struct ipahal_reg_rsrc_grp_cfg {
u32 x_min;
u32 x_max;
u32 y_min;
u32 y_max;
};
/*
* struct ipahal_reg_rx_hps_clients - Min or Max values for RX HPS clients
* @client_minmax - Min or Max values. In case of depth 0 the 4 values
* are used. In case of depth 1, only the first 2 values are used
*/
struct ipahal_reg_rx_hps_clients {
u32 client_minmax[4];
};
/*
* struct ipahal_reg_valmask - holding values and masking for registers
* A HAL application may require only the value and mask for some
* register fields.
* @val - The value
* @mask - The mask of the value
*/
struct ipahal_reg_valmask {
u32 val;
u32 mask;
};
/*
* struct ipahal_reg_fltrt_hash_flush - Flt/Rt flush configuration
* @v6_rt - Flush IPv6 Routing cache
* @v6_flt - Flush IPv6 Filtering cache
* @v4_rt - Flush IPv4 Routing cache
* @v4_flt - Flush IPv4 Filtering cache
*/
struct ipahal_reg_fltrt_hash_flush {
bool v6_rt;
bool v6_flt;
bool v4_rt;
bool v4_flt;
};
/*
* struct ipahal_reg_single_ndp_mode - IPA SINGLE_NDP_MODE register
* @single_ndp_en: When set to '1', IPA builds MBIM frames with up to 1
* NDP-header.
* @unused: undefined bits of the register
*/
struct ipahal_reg_single_ndp_mode {
bool single_ndp_en;
u32 undefined;
};
/*
* struct ipahal_reg_qcncm - IPA QCNCM register
* @mode_en: When QCNCM_MODE_EN=1, IPA will use QCNCM signature.
* @mode_val: Used only when QCNCM_MODE_EN=1 and sets SW Signature in
* the NDP header.
* @unused: undefined bits of the register
*/
struct ipahal_reg_qcncm {
bool mode_en;
u32 mode_val;
u32 undefined;
};
/*
* struct ipahal_reg_tx_cfg - IPA TX_CFG register
* @tx0_prefetch_disable: Disable prefetch on TX0
* @tx1_prefetch_disable: Disable prefetch on TX1
* @prefetch_almost_empty_size: Prefetch almost empty size
*/
struct ipahal_reg_tx_cfg {
bool tx0_prefetch_disable;
bool tx1_prefetch_disable;
u16 prefetch_almost_empty_size;
};
/*
* ipahal_reg_name_str() - returns string that represent the register
* @reg_name: [in] register name
*/
const char *ipahal_reg_name_str(enum ipahal_reg_name reg_name);
/*
* ipahal_read_reg_n() - Get the raw value of n parameterized reg
*/
u32 ipahal_read_reg_n(enum ipahal_reg_name reg, u32 n);
/*
* ipahal_write_reg_mn() - Write to m/n parameterized reg a raw value
*/
void ipahal_write_reg_mn(enum ipahal_reg_name reg, u32 m, u32 n, u32 val);
/*
* ipahal_write_reg_n() - Write to n parameterized reg a raw value
*/
static inline void ipahal_write_reg_n(enum ipahal_reg_name reg,
u32 n, u32 val)
{
ipahal_write_reg_mn(reg, 0, n, val);
}
/*
* ipahal_read_reg_n_fields() - Get the parsed value of n parameterized reg
*/
u32 ipahal_read_reg_n_fields(enum ipahal_reg_name reg, u32 n, void *fields);
/*
* ipahal_write_reg_n_fields() - Write to n parameterized reg a parsed value
*/
void ipahal_write_reg_n_fields(enum ipahal_reg_name reg, u32 n,
const void *fields);
/*
* ipahal_read_reg() - Get the raw value of a reg
*/
static inline u32 ipahal_read_reg(enum ipahal_reg_name reg)
{
return ipahal_read_reg_n(reg, 0);
}
/*
* ipahal_write_reg() - Write to reg a raw value
*/
static inline void ipahal_write_reg(enum ipahal_reg_name reg,
u32 val)
{
ipahal_write_reg_mn(reg, 0, 0, val);
}
/*
* ipahal_read_reg_fields() - Get the parsed value of a reg
*/
static inline u32 ipahal_read_reg_fields(enum ipahal_reg_name reg, void *fields)
{
return ipahal_read_reg_n_fields(reg, 0, fields);
}
/*
* ipahal_write_reg_fields() - Write to reg a parsed value
*/
static inline void ipahal_write_reg_fields(enum ipahal_reg_name reg,
const void *fields)
{
ipahal_write_reg_n_fields(reg, 0, fields);
}
/*
* Get the offset of a m/n parameterized register
*/
u32 ipahal_get_reg_mn_ofst(enum ipahal_reg_name reg, u32 m, u32 n);
/*
* Get the offset of a n parameterized register
*/
static inline u32 ipahal_get_reg_n_ofst(enum ipahal_reg_name reg, u32 n)
{
return ipahal_get_reg_mn_ofst(reg, 0, n);
}
/*
* Get the offset of a register
*/
static inline u32 ipahal_get_reg_ofst(enum ipahal_reg_name reg)
{
return ipahal_get_reg_mn_ofst(reg, 0, 0);
}
/*
* Get the register base address
*/
u32 ipahal_get_reg_base(void);
/*
* Specific functions
* These functions supply specific register values for specific operations
* that cannot be reached by generic functions.
* E.g. to disable aggregation, one needs to write to specific bits of the AGGR
* register while leaving the other bits untouched. This operation is very
* specific and cannot be generically defined. For such operations we define
* these specific functions.
*/
void ipahal_get_disable_aggr_valmask(struct ipahal_reg_valmask *valmask);
u32 ipahal_aggr_get_max_byte_limit(void);
u32 ipahal_aggr_get_max_pkt_limit(void);
void ipahal_get_aggr_force_close_valmask(int ep_idx,
struct ipahal_reg_valmask *valmask);
void ipahal_get_fltrt_hash_flush_valmask(
struct ipahal_reg_fltrt_hash_flush *flush,
struct ipahal_reg_valmask *valmask);
void ipahal_get_status_ep_valmask(int pipe_num,
struct ipahal_reg_valmask *valmask);
#endif /* _IPAHAL_REG_H_ */


@@ -0,0 +1,315 @@
/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPAHAL_REG_I_H_
#define _IPAHAL_REG_I_H_
int ipahal_reg_init(enum ipa_hw_type ipa_hw_type);
#define IPA_SETFIELD(val, shift, mask) (((val) << (shift)) & (mask))
#define IPA_SETFIELD_IN_REG(reg, val, shift, mask) \
(reg |= ((val) << (shift)) & (mask))
#define IPA_GETFIELD_FROM_REG(reg, shift, mask) \
(((reg) & (mask)) >> (shift))
/* IPA_ROUTE register */
#define IPA_ROUTE_ROUTE_DIS_SHFT 0x0
#define IPA_ROUTE_ROUTE_DIS_BMSK 0x1
#define IPA_ROUTE_ROUTE_DEF_PIPE_SHFT 0x1
#define IPA_ROUTE_ROUTE_DEF_PIPE_BMSK 0x3e
#define IPA_ROUTE_ROUTE_DEF_HDR_TABLE_SHFT 0x6
#define IPA_ROUTE_ROUTE_DEF_HDR_TABLE_BMSK 0x40
#define IPA_ROUTE_ROUTE_DEF_HDR_OFST_SHFT 0x7
#define IPA_ROUTE_ROUTE_DEF_HDR_OFST_BMSK 0x1ff80
#define IPA_ROUTE_ROUTE_FRAG_DEF_PIPE_BMSK 0x3e0000
#define IPA_ROUTE_ROUTE_FRAG_DEF_PIPE_SHFT 0x11
#define IPA_ROUTE_ROUTE_DEF_RETAIN_HDR_BMSK 0x1000000
#define IPA_ROUTE_ROUTE_DEF_RETAIN_HDR_SHFT 0x18
/* IPA_ENDP_INIT_HDR_n register */
#define IPA_ENDP_INIT_HDR_n_HDR_LEN_BMSK 0x3f
#define IPA_ENDP_INIT_HDR_n_HDR_LEN_SHFT 0x0
#define IPA_ENDP_INIT_HDR_n_HDR_OFST_METADATA_VALID_BMSK 0x40
#define IPA_ENDP_INIT_HDR_n_HDR_OFST_METADATA_VALID_SHFT 0x6
#define IPA_ENDP_INIT_HDR_n_HDR_OFST_METADATA_SHFT 0x7
#define IPA_ENDP_INIT_HDR_n_HDR_OFST_METADATA_BMSK 0x1f80
#define IPA_ENDP_INIT_HDR_n_HDR_ADDITIONAL_CONST_LEN_BMSK 0x7e000
#define IPA_ENDP_INIT_HDR_n_HDR_ADDITIONAL_CONST_LEN_SHFT 0xd
#define IPA_ENDP_INIT_HDR_n_HDR_OFST_PKT_SIZE_VALID_BMSK 0x80000
#define IPA_ENDP_INIT_HDR_n_HDR_OFST_PKT_SIZE_VALID_SHFT 0x13
#define IPA_ENDP_INIT_HDR_n_HDR_OFST_PKT_SIZE_BMSK 0x3f00000
#define IPA_ENDP_INIT_HDR_n_HDR_OFST_PKT_SIZE_SHFT 0x14
#define IPA_ENDP_INIT_HDR_n_HDR_A5_MUX_BMSK 0x4000000
#define IPA_ENDP_INIT_HDR_n_HDR_A5_MUX_SHFT 0x1a
#define IPA_ENDP_INIT_HDR_n_HDR_LEN_INC_DEAGG_HDR_BMSK_v2 0x8000000
#define IPA_ENDP_INIT_HDR_n_HDR_LEN_INC_DEAGG_HDR_SHFT_v2 0x1b
#define IPA_ENDP_INIT_HDR_n_HDR_METADATA_REG_VALID_BMSK_v2 0x10000000
#define IPA_ENDP_INIT_HDR_n_HDR_METADATA_REG_VALID_SHFT_v2 0x1c
/* IPA_ENDP_INIT_HDR_EXT_n register */
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_ENDIANNESS_BMSK 0x1
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_ENDIANNESS_SHFT 0x0
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_VALID_BMSK 0x2
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_VALID_SHFT 0x1
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_BMSK 0x4
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_SHFT 0x2
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_PAYLOAD_LEN_INC_PADDING_BMSK 0x8
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_PAYLOAD_LEN_INC_PADDING_SHFT 0x3
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_OFFSET_BMSK 0x3f0
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_TOTAL_LEN_OR_PAD_OFFSET_SHFT 0x4
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_PAD_TO_ALIGNMENT_SHFT 0xa
#define IPA_ENDP_INIT_HDR_EXT_n_HDR_PAD_TO_ALIGNMENT_BMSK_v3_0 0x3c00
/* IPA_ENDP_INIT_AGGR_n register */
#define IPA_ENDP_INIT_AGGR_n_AGGR_HARD_BYTE_LIMIT_ENABLE_BMSK 0x1000000
#define IPA_ENDP_INIT_AGGR_n_AGGR_HARD_BYTE_LIMIT_ENABLE_SHFT 0x18
#define IPA_ENDP_INIT_AGGR_n_AGGR_FORCE_CLOSE_BMSK 0x400000
#define IPA_ENDP_INIT_AGGR_n_AGGR_FORCE_CLOSE_SHFT 0x16
#define IPA_ENDP_INIT_AGGR_n_AGGR_SW_EOF_ACTIVE_BMSK 0x200000
#define IPA_ENDP_INIT_AGGR_n_AGGR_SW_EOF_ACTIVE_SHFT 0x15
#define IPA_ENDP_INIT_AGGR_n_AGGR_PKT_LIMIT_BMSK 0x1f8000
#define IPA_ENDP_INIT_AGGR_n_AGGR_PKT_LIMIT_SHFT 0xf
#define IPA_ENDP_INIT_AGGR_n_AGGR_TIME_LIMIT_BMSK 0x7c00
#define IPA_ENDP_INIT_AGGR_n_AGGR_TIME_LIMIT_SHFT 0xa
#define IPA_ENDP_INIT_AGGR_n_AGGR_BYTE_LIMIT_BMSK 0x3e0
#define IPA_ENDP_INIT_AGGR_n_AGGR_BYTE_LIMIT_SHFT 0x5
#define IPA_ENDP_INIT_AGGR_n_AGGR_TYPE_BMSK 0x1c
#define IPA_ENDP_INIT_AGGR_n_AGGR_TYPE_SHFT 0x2
#define IPA_ENDP_INIT_AGGR_n_AGGR_EN_BMSK 0x3
#define IPA_ENDP_INIT_AGGR_n_AGGR_EN_SHFT 0x0
/* IPA_AGGR_FORCE_CLOSE register */
#define IPA_AGGR_FORCE_CLOSE_AGGR_FORCE_CLOSE_PIPE_BITMAP_BMSK 0x3fffffff
#define IPA_AGGR_FORCE_CLOSE_AGGR_FORCE_CLOSE_PIPE_BITMAP_SHFT 0
#define IPA_AGGR_FORCE_CLOSE_AGGR_FORCE_CLOSE_PIPE_BITMAP_BMSK_V3_5 0xfffff
#define IPA_AGGR_FORCE_CLOSE_AGGR_FORCE_CLOSE_PIPE_BITMAP_SHFT_V3_5 0
/* IPA_ENDP_INIT_ROUTE_n register */
#define IPA_ENDP_INIT_ROUTE_n_ROUTE_TABLE_INDEX_BMSK 0x1f
#define IPA_ENDP_INIT_ROUTE_n_ROUTE_TABLE_INDEX_SHFT 0x0
/* IPA_ENDP_INIT_MODE_n register */
#define IPA_ENDP_INIT_MODE_n_HDR_FTCH_DISABLE_BMSK 0x40000000
#define IPA_ENDP_INIT_MODE_n_HDR_FTCH_DISABLE_SHFT 0x1e
#define IPA_ENDP_INIT_MODE_n_PAD_EN_BMSK 0x20000000
#define IPA_ENDP_INIT_MODE_n_PAD_EN_SHFT 0x1d
#define IPA_ENDP_INIT_MODE_n_PIPE_REPLICATION_EN_BMSK 0x10000000
#define IPA_ENDP_INIT_MODE_n_PIPE_REPLICATION_EN_SHFT 0x1c
#define IPA_ENDP_INIT_MODE_n_BYTE_THRESHOLD_BMSK 0xffff000
#define IPA_ENDP_INIT_MODE_n_BYTE_THRESHOLD_SHFT 0xc
#define IPA_ENDP_INIT_MODE_n_DEST_PIPE_INDEX_BMSK 0x1f0
#define IPA_ENDP_INIT_MODE_n_DEST_PIPE_INDEX_SHFT 0x4
#define IPA_ENDP_INIT_MODE_n_MODE_BMSK 0x7
#define IPA_ENDP_INIT_MODE_n_MODE_SHFT 0x0
/* IPA_ENDP_INIT_NAT_n register */
#define IPA_ENDP_INIT_NAT_n_NAT_EN_BMSK 0x3
#define IPA_ENDP_INIT_NAT_n_NAT_EN_SHFT 0x0
/* IPA_ENDP_INIT_CTRL_n register */
#define IPA_ENDP_INIT_CTRL_n_ENDP_SUSPEND_BMSK 0x1
#define IPA_ENDP_INIT_CTRL_n_ENDP_SUSPEND_SHFT 0x0
#define IPA_ENDP_INIT_CTRL_n_ENDP_DELAY_BMSK 0x2
#define IPA_ENDP_INIT_CTRL_n_ENDP_DELAY_SHFT 0x1
/* IPA_ENDP_INIT_HOL_BLOCK_EN_n register */
#define IPA_ENDP_INIT_HOL_BLOCK_EN_n_RMSK 0x1
#define IPA_ENDP_INIT_HOL_BLOCK_EN_n_MAX 19
#define IPA_ENDP_INIT_HOL_BLOCK_EN_n_EN_BMSK 0x1
#define IPA_ENDP_INIT_HOL_BLOCK_EN_n_EN_SHFT 0x0
/* IPA_ENDP_INIT_HOL_BLOCK_TIMER_n register */
#define IPA_ENDP_INIT_HOL_BLOCK_TIMER_n_TIMER_BMSK 0xffffffff
#define IPA_ENDP_INIT_HOL_BLOCK_TIMER_n_TIMER_SHFT 0x0
/* IPA_ENDP_INIT_DEAGGR_n register */
#define IPA_ENDP_INIT_DEAGGR_n_MAX_PACKET_LEN_BMSK 0xFFFF0000
#define IPA_ENDP_INIT_DEAGGR_n_MAX_PACKET_LEN_SHFT 0x10
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_LOCATION_BMSK 0x3F00
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_LOCATION_SHFT 0x8
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_VALID_BMSK 0x80
#define IPA_ENDP_INIT_DEAGGR_n_PACKET_OFFSET_VALID_SHFT 0x7
#define IPA_ENDP_INIT_DEAGGR_n_DEAGGR_HDR_LEN_BMSK 0x3F
#define IPA_ENDP_INIT_DEAGGR_n_DEAGGR_HDR_LEN_SHFT 0x0
/* IPA_ENDP_INIT_SEQ_n register */
#define IPA_ENDP_INIT_SEQ_n_DPS_REP_SEQ_TYPE_BMSK 0xf000
#define IPA_ENDP_INIT_SEQ_n_DPS_REP_SEQ_TYPE_SHFT 0xc
#define IPA_ENDP_INIT_SEQ_n_HPS_REP_SEQ_TYPE_BMSK 0xf00
#define IPA_ENDP_INIT_SEQ_n_HPS_REP_SEQ_TYPE_SHFT 0x8
#define IPA_ENDP_INIT_SEQ_n_DPS_SEQ_TYPE_BMSK 0xf0
#define IPA_ENDP_INIT_SEQ_n_DPS_SEQ_TYPE_SHFT 0x4
#define IPA_ENDP_INIT_SEQ_n_HPS_SEQ_TYPE_BMSK 0xf
#define IPA_ENDP_INIT_SEQ_n_HPS_SEQ_TYPE_SHFT 0x0
/* IPA_DEBUG_CNT_REG_n register */
#define IPA_DEBUG_CNT_REG_N_RMSK 0xffffffff
#define IPA_DEBUG_CNT_REG_N_MAX 15
#define IPA_DEBUG_CNT_REG_N_DBG_CNT_REG_BMSK 0xffffffff
#define IPA_DEBUG_CNT_REG_N_DBG_CNT_REG_SHFT 0x0
/* IPA_ENDP_INIT_CFG_n register */
#define IPA_ENDP_INIT_CFG_n_CS_GEN_QMB_MASTER_SEL_BMSK 0x100
#define IPA_ENDP_INIT_CFG_n_CS_GEN_QMB_MASTER_SEL_SHFT 0x8
#define IPA_ENDP_INIT_CFG_n_CS_METADATA_HDR_OFFSET_BMSK 0x78
#define IPA_ENDP_INIT_CFG_n_CS_METADATA_HDR_OFFSET_SHFT 0x3
#define IPA_ENDP_INIT_CFG_n_CS_OFFLOAD_EN_BMSK 0x6
#define IPA_ENDP_INIT_CFG_n_CS_OFFLOAD_EN_SHFT 0x1
#define IPA_ENDP_INIT_CFG_n_FRAG_OFFLOAD_EN_BMSK 0x1
#define IPA_ENDP_INIT_CFG_n_FRAG_OFFLOAD_EN_SHFT 0x0
/* IPA_ENDP_INIT_HDR_METADATA_MASK_n register */
#define IPA_ENDP_INIT_HDR_METADATA_MASK_n_METADATA_MASK_BMSK 0xffffffff
#define IPA_ENDP_INIT_HDR_METADATA_MASK_n_METADATA_MASK_SHFT 0x0
/* IPA_ENDP_INIT_HDR_METADATA_n register */
#define IPA_ENDP_INIT_HDR_METADATA_n_METADATA_BMSK 0xffffffff
#define IPA_ENDP_INIT_HDR_METADATA_n_METADATA_SHFT 0x0
/* IPA_ENDP_INIT_RSRC_GRP_n register */
#define IPA_ENDP_INIT_RSRC_GRP_n_RSRC_GRP_BMSK 0x7
#define IPA_ENDP_INIT_RSRC_GRP_n_RSRC_GRP_SHFT 0
#define IPA_ENDP_INIT_RSRC_GRP_n_RSRC_GRP_BMSK_v3_5 0x3
#define IPA_ENDP_INIT_RSRC_GRP_n_RSRC_GRP_SHFT_v3_5 0
/* IPA_SHARED_MEM_SIZE register */
#define IPA_SHARED_MEM_SIZE_SHARED_MEM_BADDR_BMSK 0xffff0000
#define IPA_SHARED_MEM_SIZE_SHARED_MEM_BADDR_SHFT 0x10
#define IPA_SHARED_MEM_SIZE_SHARED_MEM_SIZE_BMSK 0xffff
#define IPA_SHARED_MEM_SIZE_SHARED_MEM_SIZE_SHFT 0x0
/* IPA_DEBUG_CNT_CTRL_n register */
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_RULE_INDEX_PIPE_RULE_BMSK 0x10000000
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_RULE_INDEX_PIPE_RULE_SHFT 0x1c
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_RULE_INDEX_BMSK 0x0ff00000
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_RULE_INDEX_BMSK_V3_5 0x1ff00000
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_RULE_INDEX_SHFT 0x14
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_SOURCE_PIPE_BMSK 0x1f000
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_SOURCE_PIPE_SHFT 0xc
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_PRODUCT_BMSK 0x100
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_PRODUCT_SHFT 0x8
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_TYPE_BMSK 0x70
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_TYPE_SHFT 0x4
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_EN_BMSK 0x1
#define IPA_DEBUG_CNT_CTRL_n_DBG_CNT_EN_SHFT 0x0
/* IPA_FILT_ROUT_HASH_FLUSH register */
#define IPA_FILT_ROUT_HASH_FLUSH_IPv4_FILT_SHFT 12
#define IPA_FILT_ROUT_HASH_FLUSH_IPv4_ROUT_SHFT 8
#define IPA_FILT_ROUT_HASH_FLUSH_IPv6_FILT_SHFT 4
#define IPA_FILT_ROUT_HASH_FLUSH_IPv6_ROUT_SHFT 0
/* IPA_SINGLE_NDP_MODE register */
#define IPA_SINGLE_NDP_MODE_UNDEFINED_BMSK 0xfffffffe
#define IPA_SINGLE_NDP_MODE_UNDEFINED_SHFT 0x1
#define IPA_SINGLE_NDP_MODE_SINGLE_NDP_EN_BMSK 0x1
#define IPA_SINGLE_NDP_MODE_SINGLE_NDP_EN_SHFT 0
/* IPA_QCNCM register */
#define IPA_QCNCM_MODE_UNDEFINED2_BMSK 0xf0000000
#define IPA_QCNCM_MODE_UNDEFINED2_SHFT 0x1c
#define IPA_QCNCM_MODE_VAL_BMSK 0xffffff0
#define IPA_QCNCM_MODE_VAL_SHFT 0x4
#define IPA_QCNCM_UNDEFINED1_BMSK 0xe
#define IPA_QCNCM_UNDEFINED1_SHFT 0x1
#define IPA_QCNCM_MODE_EN_BMSK 0x1
#define IPA_QCNCM_MODE_EN_SHFT 0
/* IPA_ENDP_STATUS_n register */
#define IPA_ENDP_STATUS_n_STATUS_LOCATION_BMSK 0x100
#define IPA_ENDP_STATUS_n_STATUS_LOCATION_SHFT 0x8
#define IPA_ENDP_STATUS_n_STATUS_ENDP_BMSK 0x3e
#define IPA_ENDP_STATUS_n_STATUS_ENDP_SHFT 0x1
#define IPA_ENDP_STATUS_n_STATUS_EN_BMSK 0x1
#define IPA_ENDP_STATUS_n_STATUS_EN_SHFT 0x0
/* IPA_ENDP_FILTER_ROUTER_HSH_CFG_n register */
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_SRC_ID_SHFT 0
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_SRC_ID_BMSK 0x1
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_SRC_IP_SHFT 1
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_SRC_IP_BMSK 0x2
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_DST_IP_SHFT 2
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_DST_IP_BMSK 0x4
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_SRC_PORT_SHFT 3
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_SRC_PORT_BMSK 0x8
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_DST_PORT_SHFT 4
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_DST_PORT_BMSK 0x10
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_PROTOCOL_SHFT 5
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_PROTOCOL_BMSK 0x20
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_METADATA_SHFT 6
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_FILTER_HASH_MSK_METADATA_BMSK 0x40
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_UNDEFINED1_SHFT 7
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_UNDEFINED1_BMSK 0xff80
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_SRC_ID_SHFT 16
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_SRC_ID_BMSK 0x10000
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_SRC_IP_SHFT 17
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_SRC_IP_BMSK 0x20000
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_DST_IP_SHFT 18
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_DST_IP_BMSK 0x40000
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_SRC_PORT_SHFT 19
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_SRC_PORT_BMSK 0x80000
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_DST_PORT_SHFT 20
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_DST_PORT_BMSK 0x100000
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_PROTOCOL_SHFT 21
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_PROTOCOL_BMSK 0x200000
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_METADATA_SHFT 22
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_ROUTER_HASH_MSK_METADATA_BMSK 0x400000
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_UNDEFINED2_SHFT 23
#define IPA_ENDP_FILTER_ROUTER_HSH_CFG_n_UNDEFINED2_BMSK 0xff800000
/* IPA_RSRC_GRP_XY_RSRC_TYPE_n register */
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_Y_MAX_LIM_BMSK 0xFF000000
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_Y_MAX_LIM_SHFT 24
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_Y_MIN_LIM_BMSK 0xFF0000
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_Y_MIN_LIM_SHFT 16
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_X_MAX_LIM_BMSK 0xFF00
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_X_MAX_LIM_SHFT 8
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_X_MIN_LIM_BMSK 0xFF
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_X_MIN_LIM_SHFT 0
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_Y_MAX_LIM_BMSK_V3_5 0x3F000000
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_Y_MAX_LIM_SHFT_V3_5 24
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_Y_MIN_LIM_BMSK_V3_5 0x3F0000
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_Y_MIN_LIM_SHFT_V3_5 16
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_X_MAX_LIM_BMSK_V3_5 0x3F00
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_X_MAX_LIM_SHFT_V3_5 8
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_X_MIN_LIM_BMSK_V3_5 0x3F
#define IPA_RSRC_GRP_XY_RSRC_TYPE_n_X_MIN_LIM_SHFT_V3_5 0
/* IPA_RX_HPS_CLIENTS_MIN/MAX_DEPTH_0/1 registers */
#define IPA_RX_HPS_CLIENTS_MINMAX_DEPTH_X_CLIENT_n_BMSK(n) (0x7F << (8 * (n)))
#define IPA_RX_HPS_CLIENTS_MINMAX_DEPTH_X_CLIENT_n_BMSK_V3_5(n) \
(0xF << (8 * (n)))
#define IPA_RX_HPS_CLIENTS_MINMAX_DEPTH_X_CLIENT_n_SHFT(n) (8 * (n))
/* IPA_QSB_MAX_WRITES register */
#define IPA_QSB_MAX_WRITES_GEN_QMB_0_MAX_WRITES_BMSK (0xf)
#define IPA_QSB_MAX_WRITES_GEN_QMB_0_MAX_WRITES_SHFT (0)
#define IPA_QSB_MAX_WRITES_GEN_QMB_1_MAX_WRITES_BMSK (0xf0)
#define IPA_QSB_MAX_WRITES_GEN_QMB_1_MAX_WRITES_SHFT (4)
/* IPA_QSB_MAX_READS register */
#define IPA_QSB_MAX_READS_GEN_QMB_0_MAX_READS_BMSK (0xf)
#define IPA_QSB_MAX_READS_GEN_QMB_0_MAX_READS_SHFT (0)
#define IPA_QSB_MAX_READS_GEN_QMB_1_MAX_READS_BMSK (0xf0)
#define IPA_QSB_MAX_READS_GEN_QMB_1_MAX_READS_SHFT (4)
/* IPA_TX_CFG register */
#define IPA_TX_CFG_TX0_PREFETCH_DISABLE_BMSK_V3_5 (0x1)
#define IPA_TX_CFG_TX0_PREFETCH_DISABLE_SHFT_V3_5 (0)
#define IPA_TX_CFG_TX1_PREFETCH_DISABLE_BMSK_V3_5 (0x2)
#define IPA_TX_CFG_TX1_PREFETCH_DISABLE_SHFT_V3_5 (1)
#define IPA_TX_CFG_PREFETCH_ALMOST_EMPTY_SIZE_BMSK_V3_5 (0x1C)
#define IPA_TX_CFG_PREFETCH_ALMOST_EMPTY_SIZE_SHFT_V3_5 (2)
#endif /* _IPAHAL_REG_I_H_ */

File diff suppressed because it is too large


@@ -0,0 +1,391 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/rmnet_ipa_fd_ioctl.h>
#include "ipa_qmi_service.h"
#define DRIVER_NAME "wwan_ioctl"
#ifdef CONFIG_COMPAT
#define WAN_IOC_ADD_FLT_RULE32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_ADD_FLT_RULE, \
compat_uptr_t)
#define WAN_IOC_ADD_FLT_RULE_INDEX32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_ADD_FLT_INDEX, \
compat_uptr_t)
#define WAN_IOC_POLL_TETHERING_STATS32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_POLL_TETHERING_STATS, \
compat_uptr_t)
#define WAN_IOC_SET_DATA_QUOTA32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_SET_DATA_QUOTA, \
compat_uptr_t)
#define WAN_IOC_SET_TETHER_CLIENT_PIPE32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_SET_TETHER_CLIENT_PIPE, \
compat_uptr_t)
#define WAN_IOC_QUERY_TETHER_STATS32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_QUERY_TETHER_STATS, \
compat_uptr_t)
#define WAN_IOC_RESET_TETHER_STATS32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_RESET_TETHER_STATS, \
compat_uptr_t)
#define WAN_IOC_QUERY_DL_FILTER_STATS32 _IOWR(WAN_IOC_MAGIC, \
WAN_IOCTL_QUERY_DL_FILTER_STATS, \
compat_uptr_t)
#endif
static unsigned int dev_num = 1;
static struct cdev ipa3_wan_ioctl_cdev;
static unsigned int ipa3_process_ioctl = 1;
static struct class *class;
static dev_t device;
static long ipa3_wan_ioctl(struct file *filp,
unsigned int cmd,
unsigned long arg)
{
int retval = 0;
u32 pyld_sz;
u8 *param = NULL;
IPAWANDBG("device %s got ioctl events :>>>\n",
DRIVER_NAME);
if (!ipa3_process_ioctl) {
IPAWANDBG("modem is in SSR, ignoring ioctl\n");
return -EAGAIN;
}
switch (cmd) {
case WAN_IOC_ADD_FLT_RULE:
IPAWANDBG("device %s got WAN_IOC_ADD_FLT_RULE :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct ipa_install_fltr_rule_req_msg_v01);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (ipa3_qmi_filter_request_send(
(struct ipa_install_fltr_rule_req_msg_v01 *)param)) {
IPAWANERR("IPACM->Q6 add filter rule failed\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_ADD_FLT_RULE_INDEX:
IPAWANDBG("device %s got WAN_IOC_ADD_FLT_RULE_INDEX :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct ipa_fltr_installed_notif_req_msg_v01);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (ipa3_qmi_filter_notify_send(
(struct ipa_fltr_installed_notif_req_msg_v01 *)param)) {
IPAWANERR("IPACM->Q6 add filter rule index failed\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_VOTE_FOR_BW_MBPS:
IPAWANDBG("device %s got WAN_IOC_VOTE_FOR_BW_MBPS :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(uint32_t);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (ipa3_vote_for_bus_bw((uint32_t *)param)) {
IPAWANERR("Failed to vote for bus BW\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_POLL_TETHERING_STATS:
IPAWANDBG_LOW("got WAN_IOCTL_POLL_TETHERING_STATS :>>>\n");
pyld_sz = sizeof(struct wan_ioctl_poll_tethering_stats);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa3_poll_tethering_stats(
(struct wan_ioctl_poll_tethering_stats *)param)) {
IPAWANERR("WAN_IOCTL_POLL_TETHERING_STATS failed\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_SET_DATA_QUOTA:
IPAWANDBG_LOW("got WAN_IOCTL_SET_DATA_QUOTA :>>>\n");
pyld_sz = sizeof(struct wan_ioctl_set_data_quota);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa3_set_data_quota(
(struct wan_ioctl_set_data_quota *)param)) {
IPAWANERR("WAN_IOC_SET_DATA_QUOTA failed\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_SET_TETHER_CLIENT_PIPE:
IPAWANDBG_LOW("got WAN_IOC_SET_TETHER_CLIENT_PIPE :>>>\n");
pyld_sz = sizeof(struct wan_ioctl_set_tether_client_pipe);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa3_set_tether_client_pipe(
(struct wan_ioctl_set_tether_client_pipe *)param)) {
IPAWANERR("WAN_IOC_SET_TETHER_CLIENT_PIPE failed\n");
retval = -EFAULT;
break;
}
break;
case WAN_IOC_QUERY_TETHER_STATS:
IPAWANDBG_LOW("got WAN_IOC_QUERY_TETHER_STATS :>>>\n");
pyld_sz = sizeof(struct wan_ioctl_query_tether_stats);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa3_query_tethering_stats(
(struct wan_ioctl_query_tether_stats *)param, false)) {
IPAWANERR("WAN_IOC_QUERY_TETHER_STATS failed\n");
retval = -EFAULT;
break;
}
if (copy_to_user((u8 *)arg, param, pyld_sz)) {
retval = -EFAULT;
break;
}
break;
case WAN_IOC_RESET_TETHER_STATS:
IPAWANDBG_LOW("device %s got WAN_IOC_RESET_TETHER_STATS :>>>\n",
DRIVER_NAME);
pyld_sz = sizeof(struct wan_ioctl_reset_tether_stats);
param = kzalloc(pyld_sz, GFP_KERNEL);
if (!param) {
retval = -ENOMEM;
break;
}
if (copy_from_user(param, (u8 *)arg, pyld_sz)) {
retval = -EFAULT;
break;
}
if (rmnet_ipa3_query_tethering_stats(NULL, true)) {
IPAWANERR("WAN_IOC_RESET_TETHER_STATS failed\n");
retval = -EFAULT;
break;
}
break;
default:
retval = -ENOTTY;
}
kfree(param);
return retval;
}
#ifdef CONFIG_COMPAT
long ipa3_compat_wan_ioctl(struct file *file,
unsigned int cmd,
unsigned long arg)
{
switch (cmd) {
case WAN_IOC_ADD_FLT_RULE32:
cmd = WAN_IOC_ADD_FLT_RULE;
break;
case WAN_IOC_ADD_FLT_RULE_INDEX32:
cmd = WAN_IOC_ADD_FLT_RULE_INDEX;
break;
case WAN_IOC_POLL_TETHERING_STATS32:
cmd = WAN_IOC_POLL_TETHERING_STATS;
break;
case WAN_IOC_SET_DATA_QUOTA32:
cmd = WAN_IOC_SET_DATA_QUOTA;
break;
case WAN_IOC_SET_TETHER_CLIENT_PIPE32:
cmd = WAN_IOC_SET_TETHER_CLIENT_PIPE;
break;
case WAN_IOC_QUERY_TETHER_STATS32:
cmd = WAN_IOC_QUERY_TETHER_STATS;
break;
case WAN_IOC_RESET_TETHER_STATS32:
cmd = WAN_IOC_RESET_TETHER_STATS;
break;
case WAN_IOC_QUERY_DL_FILTER_STATS32:
cmd = WAN_IOC_QUERY_DL_FILTER_STATS;
break;
default:
return -ENOIOCTLCMD;
}
return ipa3_wan_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
}
#endif
static int ipa3_wan_ioctl_open(struct inode *inode, struct file *filp)
{
IPAWANDBG("IPA A7 ipa3_wan_ioctl open OK :>>>>\n");
return 0;
}
const struct file_operations rmnet_ipa3_fops = {
.owner = THIS_MODULE,
.open = ipa3_wan_ioctl_open,
.read = NULL,
.unlocked_ioctl = ipa3_wan_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ipa3_compat_wan_ioctl,
#endif
};
int ipa3_wan_ioctl_init(void)
{
unsigned int wan_ioctl_major = 0;
int ret;
struct device *dev;
device = MKDEV(wan_ioctl_major, 0);
ret = alloc_chrdev_region(&device, 0, dev_num, DRIVER_NAME);
if (ret) {
IPAWANERR(":alloc_chrdev_region err.\n");
goto dev_alloc_err;
}
wan_ioctl_major = MAJOR(device);
class = class_create(THIS_MODULE, DRIVER_NAME);
if (IS_ERR(class)) {
IPAWANERR(":class_create err.\n");
goto class_err;
}
dev = device_create(class, NULL, device,
NULL, DRIVER_NAME);
if (IS_ERR(dev)) {
IPAWANERR(":device_create err.\n");
goto device_err;
}
cdev_init(&ipa3_wan_ioctl_cdev, &rmnet_ipa3_fops);
ret = cdev_add(&ipa3_wan_ioctl_cdev, device, dev_num);
if (ret) {
IPAWANERR(":cdev_add err.\n");
goto cdev_add_err;
}
ipa3_process_ioctl = 1;
IPAWANDBG("IPA %s major(%d) initial ok :>>>>\n",
DRIVER_NAME, wan_ioctl_major);
return 0;
cdev_add_err:
device_destroy(class, device);
device_err:
class_destroy(class);
class_err:
unregister_chrdev_region(device, dev_num);
dev_alloc_err:
return -ENODEV;
}
void ipa3_wan_ioctl_stop_qmi_messages(void)
{
ipa3_process_ioctl = 0;
}
void ipa3_wan_ioctl_enable_qmi_messages(void)
{
ipa3_process_ioctl = 1;
}
void ipa3_wan_ioctl_deinit(void)
{
cdev_del(&ipa3_wan_ioctl_cdev);
device_destroy(class, device);
class_destroy(class);
unregister_chrdev_region(device, dev_num);
}


@@ -0,0 +1,253 @@
/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/completion.h>
#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/fs.h>
#include <linux/if_ether.h>
#include <linux/ioctl.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/msm_ipa.h>
#include <linux/mutex.h>
#include <linux/skbuff.h>
#include <linux/types.h>
#include <linux/ipa.h>
#include <linux/netdevice.h>
#include "ipa_i.h"
#define TETH_BRIDGE_DRV_NAME "ipa_tethering_bridge"
#define TETH_DBG(fmt, args...) \
pr_debug(TETH_BRIDGE_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args)
#define TETH_DBG_FUNC_ENTRY() \
pr_debug(TETH_BRIDGE_DRV_NAME " %s:%d ENTRY\n", __func__, __LINE__)
#define TETH_DBG_FUNC_EXIT() \
pr_debug(TETH_BRIDGE_DRV_NAME " %s:%d EXIT\n", __func__, __LINE__)
#define TETH_ERR(fmt, args...) \
pr_err(TETH_BRIDGE_DRV_NAME " %s:%d " fmt, __func__, __LINE__, ## args)
/**
* struct ipa3_teth_bridge_ctx - Tethering bridge driver context information
* @class: kernel class pointer
* @dev_num: kernel device number
* @dev: kernel device struct pointer
* @cdev: kernel character device struct
*/
struct ipa3_teth_bridge_ctx {
struct class *class;
dev_t dev_num;
struct device *dev;
struct cdev cdev;
};
static struct ipa3_teth_bridge_ctx *ipa3_teth_ctx;
/**
* teth_bridge_ipa_cb() - Callback to handle IPA data path events
* @priv: private data
* @evt: event type
* @data: event specific data (usually skb)
*
* This callback is called by the IPA driver for exception packets from USB.
* All exception packets are handled by Q6 and should not reach this function.
* Packets arrive at the AP exception pipe only when they are sent from USB
* before Q6 has set up the call.
*/
static void teth_bridge_ipa_cb(void *priv, enum ipa_dp_evt_type evt,
unsigned long data)
{
struct sk_buff *skb = (struct sk_buff *)data;
TETH_DBG_FUNC_ENTRY();
if (evt != IPA_RECEIVE) {
TETH_ERR("unexpected event %d\n", evt);
WARN_ON(1);
return;
}
TETH_ERR("Unexpected exception packet from USB, dropping packet\n");
dev_kfree_skb_any(skb);
TETH_DBG_FUNC_EXIT();
}
/**
* ipa3_teth_bridge_init() - Initialize the Tethering bridge driver
* @params: in/out params for USB initialization API (please look at struct
* definition for more info)
*
* USB driver gets a pointer to a callback function (usb_notify_cb) and an
* associated data. USB driver installs this callback function in the call to
* ipa3_connect().
*
* Builds IPA resource manager dependency graph.
*
* Return codes: 0: success,
* -EINVAL - Bad parameter
* Other negative value - Failure
*/
int ipa3_teth_bridge_init(struct teth_bridge_init_params *params)
{
TETH_DBG_FUNC_ENTRY();
if (!params) {
TETH_ERR("Bad parameter\n");
TETH_DBG_FUNC_EXIT();
return -EINVAL;
}
params->usb_notify_cb = teth_bridge_ipa_cb;
params->private_data = NULL;
params->skip_ep_cfg = true;
TETH_DBG_FUNC_EXIT();
return 0;
}
/**
* ipa3_teth_bridge_disconnect() - Disconnect tethering bridge module
*/
int ipa3_teth_bridge_disconnect(enum ipa_client_type client)
{
TETH_DBG_FUNC_ENTRY();
ipa_rm_delete_dependency(IPA_RM_RESOURCE_USB_PROD,
IPA_RM_RESOURCE_Q6_CONS);
ipa_rm_delete_dependency(IPA_RM_RESOURCE_Q6_PROD,
IPA_RM_RESOURCE_USB_CONS);
TETH_DBG_FUNC_EXIT();
return 0;
}
/**
* ipa3_teth_bridge_connect() - Connect bridge for a tethered Rmnet / MBIM call
* @connect_params: Connection info
*
* Return codes: 0: success
* -EINVAL: invalid parameters
* -EPERM: Operation not permitted as the bridge is already
* connected
*/
int ipa3_teth_bridge_connect(struct teth_bridge_connect_params *connect_params)
{
int res = 0;
TETH_DBG_FUNC_ENTRY();
/* Build the dependency graph, first add_dependency call is sync
* in order to make sure the IPA clocks are up before we continue
* and notify the USB driver it may continue.
*/
res = ipa_rm_add_dependency_sync(IPA_RM_RESOURCE_USB_PROD,
IPA_RM_RESOURCE_Q6_CONS);
if (res < 0) {
TETH_ERR("ipa_rm_add_dependency() failed.\n");
goto bail;
}
/* this add_dependency call can't be sync since it will block until USB
* status is connected (which can happen only after the tethering
* bridge is connected), the clocks are already up so the call doesn't
* need to block.
*/
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_Q6_PROD,
IPA_RM_RESOURCE_USB_CONS);
if (res < 0 && res != -EINPROGRESS) {
ipa_rm_delete_dependency(IPA_RM_RESOURCE_USB_PROD,
IPA_RM_RESOURCE_Q6_CONS);
TETH_ERR("ipa_rm_add_dependency() failed.\n");
goto bail;
}
res = 0;
bail:
TETH_DBG_FUNC_EXIT();
return res;
}
static long ipa3_teth_bridge_ioctl(struct file *filp,
unsigned int cmd,
unsigned long arg)
{
IPAERR("No ioctls are supported!\n");
return -ENOIOCTLCMD;
}
static const struct file_operations ipa3_teth_bridge_drv_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = ipa3_teth_bridge_ioctl,
};
/**
* ipa3_teth_bridge_driver_init() - Initialize tethering bridge driver
*
*/
int ipa3_teth_bridge_driver_init(void)
{
int res;
TETH_DBG("Tethering bridge driver init\n");
ipa3_teth_ctx = kzalloc(sizeof(*ipa3_teth_ctx), GFP_KERNEL);
if (!ipa3_teth_ctx) {
TETH_ERR("kzalloc err.\n");
return -ENOMEM;
}
ipa3_teth_ctx->class = class_create(THIS_MODULE, TETH_BRIDGE_DRV_NAME);
if (IS_ERR(ipa3_teth_ctx->class)) {
TETH_ERR(":class_create err.\n");
res = -ENODEV;
goto fail_class_create;
}
res = alloc_chrdev_region(&ipa3_teth_ctx->dev_num, 0, 1,
TETH_BRIDGE_DRV_NAME);
if (res) {
TETH_ERR("alloc_chrdev_region err.\n");
res = -ENODEV;
goto fail_alloc_chrdev_region;
}
ipa3_teth_ctx->dev = device_create(ipa3_teth_ctx->class,
NULL,
ipa3_teth_ctx->dev_num,
ipa3_teth_ctx,
TETH_BRIDGE_DRV_NAME);
if (IS_ERR(ipa3_teth_ctx->dev)) {
TETH_ERR(":device_create err.\n");
res = -ENODEV;
goto fail_device_create;
}
cdev_init(&ipa3_teth_ctx->cdev, &ipa3_teth_bridge_drv_fops);
ipa3_teth_ctx->cdev.owner = THIS_MODULE;
ipa3_teth_ctx->cdev.ops = &ipa3_teth_bridge_drv_fops;
res = cdev_add(&ipa3_teth_ctx->cdev, ipa3_teth_ctx->dev_num, 1);
if (res) {
TETH_ERR(":cdev_add err=%d\n", -res);
res = -ENODEV;
goto fail_cdev_add;
}
TETH_DBG("Tethering bridge driver init OK\n");
return 0;
fail_cdev_add:
device_destroy(ipa3_teth_ctx->class, ipa3_teth_ctx->dev_num);
fail_device_create:
unregister_chrdev_region(ipa3_teth_ctx->dev_num, 1);
fail_alloc_chrdev_region:
class_destroy(ipa3_teth_ctx->class);
fail_class_create:
kfree(ipa3_teth_ctx);
ipa3_teth_ctx = NULL;
return res;
}
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Tethering bridge driver");


@@ -0,0 +1,2 @@
obj-$(CONFIG_IPA_UT) += ipa_ut_mod.o
ipa_ut_mod-y := ipa_ut_framework.o ipa_test_example.o ipa_test_mhi.o


@@ -0,0 +1,99 @@
/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include "ipa_ut_framework.h"
/**
* Example IPA Unit-test suite
* To be a reference for writing new suites and tests.
* This suite is also used as unit-test for the testing framework itself.
* Structure:
* 1- Define the setup and teardown functions
* Not mandatory; NULL may be used as well
* 2- For each test, define its Run() function
* 3- Use IPA_UT_DEFINE_SUITE_START() to start defining the suite
* 4- use IPA_UT_ADD_TEST() for adding tests within
* the suite definition block
* 5- IPA_UT_DEFINE_SUITE_END() closes the suite definition
*/
static int ipa_test_example_dummy;
static int ipa_test_example_suite_setup(void **ppriv)
{
IPA_UT_DBG("Start Setup - set 0x1234F\n");
ipa_test_example_dummy = 0x1234F;
*ppriv = (void *)&ipa_test_example_dummy;
return 0;
}
static int ipa_test_example_teardown(void *priv)
{
IPA_UT_DBG("Start Teardown\n");
IPA_UT_DBG("priv=0x%p - value=0x%x\n", priv, *((int *)priv));
return 0;
}
static int ipa_test_example_test1(void *priv)
{
IPA_UT_LOG("priv=0x%p - value=0x%x\n", priv, *((int *)priv));
ipa_test_example_dummy++;
return 0;
}
static int ipa_test_example_test2(void *priv)
{
IPA_UT_LOG("priv=0x%p - value=0x%x\n", priv, *((int *)priv));
ipa_test_example_dummy++;
return 0;
}
static int ipa_test_example_test3(void *priv)
{
IPA_UT_LOG("priv=0x%p - value=0x%x\n", priv, *((int *)priv));
ipa_test_example_dummy++;
return 0;
}
static int ipa_test_example_test4(void *priv)
{
IPA_UT_LOG("priv=0x%p - value=0x%x\n", priv, *((int *)priv));
ipa_test_example_dummy++;
IPA_UT_TEST_FAIL_REPORT("failed on test");
return -EFAULT;
}
/* Suite definition block */
IPA_UT_DEFINE_SUITE_START(example, "Example suite",
ipa_test_example_suite_setup, ipa_test_example_teardown)
{
IPA_UT_ADD_TEST(test1, "This is test number 1",
ipa_test_example_test1, false, IPA_HW_v1_0, IPA_HW_MAX),
IPA_UT_ADD_TEST(test2, "This is test number 2",
ipa_test_example_test2, false, IPA_HW_v1_0, IPA_HW_MAX),
IPA_UT_ADD_TEST(test3, "This is test number 3",
ipa_test_example_test3, false, IPA_HW_v1_1, IPA_HW_v2_6),
IPA_UT_ADD_TEST(test4, "This is test number 4",
ipa_test_example_test4, false, IPA_HW_v1_1, IPA_HW_MAX),
} IPA_UT_DEFINE_SUITE_END(example);

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,240 @@
/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_UT_FRAMEWORK_H_
#define _IPA_UT_FRAMEWORK_H_
#include <linux/kernel.h>
#include "../ipa_common_i.h"
#include "ipa_ut_i.h"
#define IPA_UT_DRV_NAME "ipa_ut"
#define IPA_UT_DBG(fmt, args...) \
do { \
pr_debug(IPA_UT_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPA_UT_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_UT_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_UT_DBG_LOW(fmt, args...) \
do { \
pr_debug(IPA_UT_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_UT_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_UT_ERR(fmt, args...) \
do { \
pr_err(IPA_UT_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPA_UT_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_UT_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
#define IPA_UT_INFO(fmt, args...) \
do { \
pr_info(IPA_UT_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf(), \
IPA_UT_DRV_NAME " %s:%d " fmt, ## args); \
IPA_IPC_LOGGING(ipa_get_ipc_logbuf_low(), \
IPA_UT_DRV_NAME " %s:%d " fmt, ## args); \
} while (0)
/**
* struct ipa_ut_tst_fail_report - Information on test failure
* @valid: When a test posts a report, valid will be marked true
* @file: File name containing the failed test.
* @line: Number of line in the file where the test failed.
* @func: Function in which the test failed.
* @info: Information about the failure.
*/
struct ipa_ut_tst_fail_report {
bool valid;
const char *file;
int line;
const char *func;
const char *info;
};
/**
* Report on test failure
* To be used by tests to report the point where a test failed.
* Failures are saved in a stack manner.
* Dumping the failure info will dump the fail reports
* from all the functions in the calling stack
*/
#define IPA_UT_TEST_FAIL_REPORT(__info) \
do { \
extern struct ipa_ut_tst_fail_report \
_IPA_UT_TEST_FAIL_REPORT_DATA \
[_IPA_UT_TEST_FAIL_REPORT_SIZE]; \
extern u32 _IPA_UT_TEST_FAIL_REPORT_IDX; \
struct ipa_ut_tst_fail_report *entry; \
if (_IPA_UT_TEST_FAIL_REPORT_IDX >= \
_IPA_UT_TEST_FAIL_REPORT_SIZE) \
break; \
entry = &(_IPA_UT_TEST_FAIL_REPORT_DATA \
[_IPA_UT_TEST_FAIL_REPORT_IDX]); \
entry->file = __FILENAME__; \
entry->line = __LINE__; \
entry->func = __func__; \
if (__info) \
entry->info = __info; \
else \
entry->info = ""; \
_IPA_UT_TEST_FAIL_REPORT_IDX++; \
} while (0)
/**
* To be used by tests to log progress and ongoing information
* Logs are not printed to user, but saved to a buffer.
* The framework prints the buffer on various occasions, e.g. on test failure
*/
#define IPA_UT_LOG(fmt, args...) \
do { \
extern char *_IPA_UT_TEST_LOG_BUF_NAME; \
char __buf[512]; \
IPA_UT_DBG(fmt, ## args); \
if (!_IPA_UT_TEST_LOG_BUF_NAME) {\
pr_err(IPA_UT_DRV_NAME " %s:%d " fmt, \
__func__, __LINE__, ## args); \
break; \
} \
scnprintf(__buf, sizeof(__buf), \
" %s:%d " fmt, \
__func__, __LINE__, ## args); \
strlcat(_IPA_UT_TEST_LOG_BUF_NAME, __buf, \
_IPA_UT_TEST_LOG_BUF_SIZE); \
} while (0)
/**
* struct ipa_ut_suite_meta - Suite meta-data
* @name: Suite unique name
* @desc: Suite description
* @setup: Setup Call-back of the suite
* @teardown: Teardown Call-back of the suite
* @priv: Private pointer of the suite
*
* Setup/Teardown will be called once for the suite when running a tests of it.
* priv field is shared between the Setup/Teardown and the tests
*/
struct ipa_ut_suite_meta {
char *name;
char *desc;
int (*setup)(void **ppriv);
int (*teardown)(void *priv);
void *priv;
};
/* Test suite data structure declaration */
struct ipa_ut_suite;
/**
* struct ipa_ut_test - Test information
* @name: Test name
* @desc: Test description
* @run: Test execution call-back
* @run_in_regression: Should this test run as part of regression?
* @min_ipa_hw_ver: Minimum IPA H/W version on which the test is supported
* @max_ipa_hw_ver: Maximum IPA H/W version on which the test is supported
* @suite: Pointer to suite containing this test
* @res: Test execution result. Will be updated after running a test as part
* of suite tests run
*/
struct ipa_ut_test {
char *name;
char *desc;
int (*run)(void *priv);
bool run_in_regression;
int min_ipa_hw_ver;
int max_ipa_hw_ver;
struct ipa_ut_suite *suite;
enum ipa_ut_test_result res;
};
/**
* struct ipa_ut_suite - Suite information
* @meta_data: Pointer to meta-data structure of the suite
* @tests: Pointer to array of tests belonging to the suite
* @tests_cnt: Number of tests
*/
struct ipa_ut_suite {
struct ipa_ut_suite_meta *meta_data;
struct ipa_ut_test *tests;
size_t tests_cnt;
};
/**
* Add a test to a suite.
* Adds an entry to the suite's tests array and fills it with the given info.
*/
#define IPA_UT_ADD_TEST(__name, __desc, __run, __run_in_regression, \
__min_ipa_hw_ver, __max_ipa_hw_ver) \
{ \
.name = #__name, \
.desc = __desc, \
.run = __run, \
.run_in_regression = __run_in_regression, \
.min_ipa_hw_ver = __min_ipa_hw_ver, \
.max_ipa_hw_ver = __max_ipa_hw_ver, \
.suite = NULL, \
}
/**
* Declare a suite
* Every suite needs to be declared before it is registered.
*/
#define IPA_UT_DECLARE_SUITE(__name) \
extern struct ipa_ut_suite _IPA_UT_SUITE_DATA(__name)
/**
* Register a suite
* Registering a suite is mandatory for it to be considered by the framework.
*/
#define IPA_UT_REGISTER_SUITE(__name) \
(&_IPA_UT_SUITE_DATA(__name))
/**
* Start/End suite definition
* Creates the suite's global structures.
* Use IPA_UT_ADD_TEST() with these macros to add tests when defining
* a suite
*/
#define IPA_UT_DEFINE_SUITE_START(__name, __desc, __setup, __teardown) \
static struct ipa_ut_suite_meta _IPA_UT_SUITE_META_DATA(__name) = \
{ \
.name = #__name, \
.desc = __desc, \
.setup = __setup, \
.teardown = __teardown, \
}; \
static struct ipa_ut_test _IPA_UT_SUITE_TESTS(__name)[] =
#define IPA_UT_DEFINE_SUITE_END(__name) \
; \
struct ipa_ut_suite _IPA_UT_SUITE_DATA(__name) = \
{ \
.meta_data = &_IPA_UT_SUITE_META_DATA(__name), \
.tests = _IPA_UT_SUITE_TESTS(__name), \
.tests_cnt = ARRAY_SIZE(_IPA_UT_SUITE_TESTS(__name)), \
}
#endif /* _IPA_UT_FRAMEWORK_H_ */


@@ -0,0 +1,88 @@
/* Copyright (c) 2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _IPA_UT_I_H_
#define _IPA_UT_I_H_
/* Suite data global structure name */
#define _IPA_UT_SUITE_DATA(__name) ipa_ut_ ##__name ##_data
/* Suite meta-data global structure name */
#define _IPA_UT_SUITE_META_DATA(__name) ipa_ut_ ##__name ##_meta_data
/* Suite global array of tests */
#define _IPA_UT_SUITE_TESTS(__name) ipa_ut_ ##__name ##_tests
/* Global array of all suites */
#define _IPA_UT_ALL_SUITES ipa_ut_all_suites_data
/* Meta-test "all" name - test to run all tests in given suite */
#define _IPA_UT_RUN_ALL_TEST_NAME "all"
/**
* Meta-test "regression" name -
* test to run all regression tests in given suite
*/
#define _IPA_UT_RUN_REGRESSION_TEST_NAME "regression"
/* Test Log buffer name and size */
#define _IPA_UT_TEST_LOG_BUF_NAME ipa_ut_tst_log_buf
#define _IPA_UT_TEST_LOG_BUF_SIZE 8192
/* Global structure for test fail execution result information */
#define _IPA_UT_TEST_FAIL_REPORT_DATA ipa_ut_tst_fail_report_data
#define _IPA_UT_TEST_FAIL_REPORT_SIZE 5
#define _IPA_UT_TEST_FAIL_REPORT_IDX ipa_ut_tst_fail_report_data_index
/* Start/End definitions of the array of suites */
#define IPA_UT_DEFINE_ALL_SUITES_START \
static struct ipa_ut_suite *_IPA_UT_ALL_SUITES[] =
#define IPA_UT_DEFINE_ALL_SUITES_END
/**
* Suites iterator - Array-like container
* First index, number of elements and element fetcher
*/
#define IPA_UT_SUITE_FIRST_INDEX 0
#define IPA_UT_SUITES_COUNT \
ARRAY_SIZE(_IPA_UT_ALL_SUITES)
#define IPA_UT_GET_SUITE(__index) \
_IPA_UT_ALL_SUITES[__index]
/**
* enum ipa_ut_test_result - Test execution result
* @IPA_UT_TEST_RES_FAIL: Test executed and failed
* @IPA_UT_TEST_RES_SUCCESS: Test executed and succeeded
* @IPA_UT_TEST_RES_SKIP: Test was not executed.
*
* When running all tests in a suite, a specific test could
* be skipped and not executed, for example due to an IPA H/W
* version mismatch.
*/
enum ipa_ut_test_result {
IPA_UT_TEST_RES_FAIL,
IPA_UT_TEST_RES_SUCCESS,
IPA_UT_TEST_RES_SKIP,
};
/**
* enum ipa_ut_meta_test_type - Type of suite meta-test
* @IPA_UT_META_TEST_ALL: Represents all tests in suite
* @IPA_UT_META_TEST_REGRESSION: Represents all regression tests in suite
*/
enum ipa_ut_meta_test_type {
IPA_UT_META_TEST_ALL,
IPA_UT_META_TEST_REGRESSION,
};
#endif /* _IPA_UT_I_H_ */

Some files were not shown because too many files have changed in this diff