Compare commits

118 Commits
vic ... bka

wulan17
4f82e79ddb arch: arm64: configs: Enable ThinLTO
Signed-off-by: wulan17 <wulan17@nusantararom.org>
2025-12-17 09:33:40 +01:00
Kevin Park
07dbeefe8f GPUCORE-36665 Fix OOB issue on KBASE_IOCTL_CS_TILER_HEAP_INIT
The 'group_id' member of the KBASE_IOCTL_CS_TILER_HEAP_INIT ioctl struct
must be validated before the CSF tiler heap is initialized.
Otherwise an out-of-bounds access of the memory group pools array for the
CSF tiler heap can occur, potentially leading to a kernel panic.

TI2: 933204 (DDK Precommit)
TI2: 933199 (BASE_CSF_TEST)

Bug: 259061568
Test: verified fix using poc
Provenance: https://code.ipdelivery.arm.com/c/GPU/mali-ddk/+/4766
Change-Id: I209a3d5152a34c278c17383e4aa9080aa9735822
(cherry picked from commit 55b44117111bf6a7e324301cbbf4f89669fa04c3)
2025-12-14 18:28:47 +00:00
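The fix pattern here is a bounds check on the untrusted ioctl field before it is ever used as an array index. A minimal standalone sketch (the pool count and helper name are illustrative, not the driver's actual definitions):

```c
#include <errno.h>
#include <stdint.h>

/* Illustrative pool count; the real driver defines its own limit. */
#define MEM_GROUP_POOL_COUNT 16

/* Reject an out-of-range group_id before it indexes the pools array. */
static int validate_group_id(uint8_t group_id)
{
    if (group_id >= MEM_GROUP_POOL_COUNT)
        return -EINVAL;  /* fail the ioctl, never touch the array */
    return 0;
}
```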
Akash Goel
379ddcf6d2 GPUCORE-36251: Make HeapContext GPU VA to be GPU cacheline aligned
A customer reported an issue where an unexpected GPU page fault happened
because the Tiler tried to access a chunk that had already been freed by
userspace. The issue was root-caused to cacheline sharing between the
HeapContexts of 2 Tiler heaps of the same Kbase context.

The page fault occurred for an application that made use of more than 1
GPU queue group, where one of the groups, and its corresponding Tiler
heap instance, was created and destroyed multiple times over the
lifetime of the application.

Kbase sub-allocates memory for a HeapContext from a 4KB page that is
mapped as cached on the GPU side, and the memory for the HeapContext is
zeroed on allocation through an uncached CPU mapping.
Since the size of a HeapContext is 32 bytes, 2 HeapContexts
(corresponding to 2 Tiler heaps of the same context) can end up sharing
the same GPU cacheline (which is 64 bytes in size).

The GPU page fault occurred because the FW found a non-NULL, stale value
for the 'free_list_head' pointer in the HeapContext even though the heap
was newly created, so the FW assumed a free chunk was available, passed
its address to the Tiler, and did not raise an OoM event for the Host.
The stale value was seen because the zeroing of the new HeapContext's
memory on allocation was lost when the cacheline was evicted from the L2
cache. The cacheline had become dirty when the FW updated the contents
of the older HeapContext (sharing the cacheline with the new
HeapContext) during a CSG suspend operation.

This commit makes the GPU VA of the HeapContext GPU cacheline aligned to
avoid cacheline sharing. The alignment suffices, and no explicit cache
flush is needed when a HeapContext is freed, as the whole GPU cache is
flushed anyway on Odin & Turse GPUs when the initial chunks are freed
just before the HeapContext is freed.

Provenance: https://code.ipdelivery.arm.com/c/GPU/mali-ddk/+/4724/
Test: Boot to home
Bug: 259523790
Change-Id: Ie9e8bffcadbd2ca7705dcd44f9be76754e28138d
Signed-off-by: Jeremy Kemp <jeremykemp@google.com>
2025-12-14 18:26:14 +00:00
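The alignment itself is a standard round-up-to-power-of-two computation; a standalone sketch using the 64-byte line size from the commit (the helper name is illustrative):

```c
#include <stdint.h>

#define GPU_CACHELINE_SIZE 64u  /* Odin/Turse GPU L2 line size per the commit */

/* Round a GPU VA up to the next cacheline boundary so two 32-byte
 * HeapContexts can never share one 64-byte line. */
static uint64_t align_up(uint64_t va, uint64_t align)
{
    return (va + align - 1) & ~(align - 1);
}
```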
Akash Goel
d89fdf55b1 GPUCORE-35070: Order write to JOB_IRQ_CLEAR reg with read from iface mem
There was an issue on the Turse platform where Kbase sometimes missed
the CSG idle event notification from FW. This happens when FW sends
back-to-back notifications for the same CSG for events like SYNC_UPDATE
and IDLE, but Kbase gets a single IRQ and observes only the first event.

The issue was root-caused to a missing barrier on the Kbase side between
the write to the JOB_IRQ_CLEAR register and the read from interface
memory, i.e. CSG_ACK. Without the barrier there is no ordering
guarantee: the write to JOB_IRQ_CLEAR can take effect after the read
from interface memory.
The ordering is needed considering the way FW & Kbase write to the
JOB_IRQ_RAWSTAT & JOB_IRQ_CLEAR registers without any synchronization.

This commit adds a dmb(osh) barrier after the write to JOB_IRQ_CLEAR to
resolve the issue.

TI2: 896668 (PLAN-12467r490 TGT CS Nightly, few CSF scenarios)
Bug: 243913790
Test: SST ~4600 hours
Provenance: https://code.ipdelivery.arm.com/c/GPU/mali-ddk/+/3841
Change-Id: I094a3b55c8ae28e8126057cdaf81990f62cd388e
(cherry picked from commit 220d89fd264b11a5b68290c3ca5a8c232e1d45db)
2025-12-14 18:16:52 +00:00
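The shape of the fix, with plain variables standing in for the MMIO registers and a C11 sequentially consistent fence standing in for the arm64 dmb(osh) (a portable sketch, not the kernel code):

```c
#include <stdatomic.h>
#include <stdint.h>

static uint32_t job_irq_clear;   /* stand-in for the JOB_IRQ_CLEAR register */
static uint32_t csg_ack = 5;     /* stand-in for CSG_ACK iface memory, as if FW wrote 5 */

/* Clear the IRQ, then fence, then read the interface memory, so the
 * clear cannot be reordered after the read. Kbase uses dmb(osh) here. */
static uint32_t clear_irq_then_read_ack(uint32_t bits)
{
    job_irq_clear = bits;
    atomic_thread_fence(memory_order_seq_cst);  /* orders write before read */
    return csg_ack;
}
```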
Paul E. McKenney
1cea7dab2a rcu: Report error for bad rcu_nocbs= parameter values
This commit prints a console message when cpulist_parse() reports a
bad list of CPUs, and sets all CPUs' bits in that case.  The reason for
setting all CPUs' bits is that this is the safe(r) choice for real-time
workloads, which would normally be the ones using the rcu_nocbs= kernel
boot parameter.  Either way, later RCU console log messages list the
actual set of CPUs whose RCU callbacks will be offloaded.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
Signed-off-by: Fiqri Ardyansyah <fiqri15072019@gmail.com>
Signed-off-by: Edwiin Kusuma Jaya <kutemeikito0905@gmail.com>
2025-12-14 18:01:37 +00:00
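The fallback behavior can be sketched in userspace with a toy parser (NR_CPUS, the function name, and the single-range grammar are all simplifications, not the kernel's cpulist_parse()):

```c
#include <stdio.h>

#define NR_CPUS 8  /* illustrative */

/* Parse a "lo-hi" CPU list; on a bad list, report the error and fall
 * back to offloading every CPU, the safer choice for real-time loads. */
static unsigned parse_nocbs(const char *s)
{
    unsigned lo, hi, mask = 0, c;

    if (sscanf(s, "%u-%u", &lo, &hi) != 2 || lo > hi || hi >= NR_CPUS) {
        fprintf(stderr, "rcu_nocbs= bad CPU list, offloading all CPUs\n");
        return (1u << NR_CPUS) - 1;  /* set all CPUs' bits */
    }
    for (c = lo; c <= hi; c++)
        mask |= 1u << c;
    return mask;
}
```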
Paul E. McKenney
cdb948743a rcu: Allow rcu_nocbs= to specify all CPUs
Currently, the rcu_nocbs= kernel boot parameter requires that a specific
list of CPUs be specified, and has no way to say "all of them".
As noted by user RavFX in a comment to Phoronix topic 1002538, this
is an inconvenient side effect of the removal of the RCU_NOCB_CPU_ALL
Kconfig option.  This commit therefore enables the rcu_nocbs= kernel boot
parameter to be given the string "all", as in "rcu_nocbs=all" to specify
that all CPUs on the system are to have their RCU callbacks offloaded.

Another approach would be to make cpulist_parse() check for "all", but
there are uses of cpulist_parse() that do other checking, which could
conflict with an "all".  This commit therefore focuses on the specific
use of cpulist_parse() in rcu_nocb_setup().

Just a note to other people who would like changes to Linux-kernel RCU:
If you send your requests to me directly, they might get fixed somewhat
faster.  RavFX's comment was posted on January 22, 2018 and I first saw
it on March 5, 2019.  And the only reason that I found it -at- -all- was
that I was looking for projects using RCU, and my search engine showed
me that Phoronix comment quite by accident.  Your choice, though!  ;-)

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
Signed-off-by: Fiqri Ardyansyah <fiqri15072019@gmail.com>
Signed-off-by: Edwiin Kusuma Jaya <kutemeikito0905@gmail.com>
2025-12-14 18:01:17 +00:00
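The special case lives in the rcu_nocbs= handler itself, ahead of the generic list parsing, so cpulist_parse() stays untouched. A toy sketch (mask width and the elided fall-through are simplifications):

```c
#include <string.h>

#define NR_CPUS 8  /* illustrative */

/* Recognize "all" before handing the string to the generic list parser. */
static unsigned nocb_mask(const char *str)
{
    if (strcmp(str, "all") == 0)
        return (1u << NR_CPUS) - 1;  /* offload every CPU */
    return 0;  /* normal cpulist parsing would happen here (elided) */
}
```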
Wilco Dijkstra
af15843f65 arm64: Use optimized memcmp
Patch written by Wilco Dijkstra submitted for review to newlib:
https://sourceware.org/ml/newlib/2017/msg00524.html

This is an optimized memcmp for AArch64. It is a complete rewrite
using a different algorithm. The previous version split into cases
where both inputs were aligned, where the inputs were mutually aligned,
and otherwise fell back to a byte loop for unaligned inputs. The new
version combines all these cases, while small inputs of less than 8
bytes are handled separately.

This allows the main code to be sped up using unaligned loads since
there are now at least 8 bytes to be compared.  After the first 8 bytes,
align the first input.  This ensures each iteration does at most one
unaligned access and mutually aligned inputs behave as aligned.
After the main loop, process the last 8 bytes using unaligned accesses.

This improves performance of (mutually) aligned cases by 25% and
unaligned by >500% (yes >6 times faster) on large inputs.

2017-06-28  Wilco Dijkstra  <wdijkstr@arm.com>

        * bionic/libc/arch-arm64/generic/bionic/memcmp.S (memcmp):
                Rewrite of optimized memcmp.

GLIBC benchtests/bench-memcmp.c performance comparison for Cortex-A53:

Length    1, alignment  1/ 1:        153%
Length    1, alignment  1/ 1:        119%
Length    1, alignment  1/ 1:        154%
Length    2, alignment  2/ 2:        121%
Length    2, alignment  2/ 2:        140%
Length    2, alignment  2/ 2:        121%
Length    3, alignment  3/ 3:        105%
Length    3, alignment  3/ 3:        105%
Length    3, alignment  3/ 3:        105%
Length    4, alignment  4/ 4:        155%
Length    4, alignment  4/ 4:        154%
Length    4, alignment  4/ 4:        161%
Length    5, alignment  5/ 5:        173%
Length    5, alignment  5/ 5:        173%
Length    5, alignment  5/ 5:        173%
Length    6, alignment  6/ 6:        145%
Length    6, alignment  6/ 6:        145%
Length    6, alignment  6/ 6:        145%
Length    7, alignment  7/ 7:        125%
Length    7, alignment  7/ 7:        125%
Length    7, alignment  7/ 7:        125%
Length    8, alignment  8/ 8:        111%
Length    8, alignment  8/ 8:        130%
Length    8, alignment  8/ 8:        124%
Length    9, alignment  9/ 9:        160%
Length    9, alignment  9/ 9:        160%
Length    9, alignment  9/ 9:        150%
Length   10, alignment 10/10:        170%
Length   10, alignment 10/10:        137%
Length   10, alignment 10/10:        150%
Length   11, alignment 11/11:        160%
Length   11, alignment 11/11:        160%
Length   11, alignment 11/11:        160%
Length   12, alignment 12/12:        146%
Length   12, alignment 12/12:        168%
Length   12, alignment 12/12:        156%
Length   13, alignment 13/13:        167%
Length   13, alignment 13/13:        167%
Length   13, alignment 13/13:        173%
Length   14, alignment 14/14:        167%
Length   14, alignment 14/14:        168%
Length   14, alignment 14/14:        168%
Length   15, alignment 15/15:        168%
Length   15, alignment 15/15:        173%
Length   15, alignment 15/15:        173%
Length    1, alignment  0/ 0:        134%
Length    1, alignment  0/ 0:        127%
Length    1, alignment  0/ 0:        119%
Length    2, alignment  0/ 0:        94%
Length    2, alignment  0/ 0:        94%
Length    2, alignment  0/ 0:        106%
Length    3, alignment  0/ 0:        82%
Length    3, alignment  0/ 0:        87%
Length    3, alignment  0/ 0:        82%
Length    4, alignment  0/ 0:        115%
Length    4, alignment  0/ 0:        115%
Length    4, alignment  0/ 0:        122%
Length    5, alignment  0/ 0:        127%
Length    5, alignment  0/ 0:        119%
Length    5, alignment  0/ 0:        127%
Length    6, alignment  0/ 0:        103%
Length    6, alignment  0/ 0:        100%
Length    6, alignment  0/ 0:        100%
Length    7, alignment  0/ 0:        82%
Length    7, alignment  0/ 0:        91%
Length    7, alignment  0/ 0:        87%
Length    8, alignment  0/ 0:        111%
Length    8, alignment  0/ 0:        124%
Length    8, alignment  0/ 0:        124%
Length    9, alignment  0/ 0:        136%
Length    9, alignment  0/ 0:        136%
Length    9, alignment  0/ 0:        136%
Length   10, alignment  0/ 0:        136%
Length   10, alignment  0/ 0:        135%
Length   10, alignment  0/ 0:        136%
Length   11, alignment  0/ 0:        136%
Length   11, alignment  0/ 0:        136%
Length   11, alignment  0/ 0:        135%
Length   12, alignment  0/ 0:        136%
Length   12, alignment  0/ 0:        136%
Length   12, alignment  0/ 0:        136%
Length   13, alignment  0/ 0:        135%
Length   13, alignment  0/ 0:        136%
Length   13, alignment  0/ 0:        136%
Length   14, alignment  0/ 0:        136%
Length   14, alignment  0/ 0:        136%
Length   14, alignment  0/ 0:        136%
Length   15, alignment  0/ 0:        136%
Length   15, alignment  0/ 0:        136%
Length   15, alignment  0/ 0:        136%
Length    4, alignment  0/ 0:        115%
Length    4, alignment  0/ 0:        115%
Length    4, alignment  0/ 0:        115%
Length   32, alignment  0/ 0:        127%
Length   32, alignment  7/ 2:        395%
Length   32, alignment  0/ 0:        127%
Length   32, alignment  0/ 0:        127%
Length    8, alignment  0/ 0:        111%
Length    8, alignment  0/ 0:        124%
Length    8, alignment  0/ 0:        124%
Length   64, alignment  0/ 0:        128%
Length   64, alignment  6/ 4:        475%
Length   64, alignment  0/ 0:        131%
Length   64, alignment  0/ 0:        134%
Length   16, alignment  0/ 0:        128%
Length   16, alignment  0/ 0:        119%
Length   16, alignment  0/ 0:        128%
Length  128, alignment  0/ 0:        129%
Length  128, alignment  5/ 6:        475%
Length  128, alignment  0/ 0:        130%
Length  128, alignment  0/ 0:        129%
Length   32, alignment  0/ 0:        126%
Length   32, alignment  0/ 0:        126%
Length   32, alignment  0/ 0:        126%
Length  256, alignment  0/ 0:        127%
Length  256, alignment  4/ 8:        545%
Length  256, alignment  0/ 0:        126%
Length  256, alignment  0/ 0:        128%
Length   64, alignment  0/ 0:        171%
Length   64, alignment  0/ 0:        171%
Length   64, alignment  0/ 0:        174%
Length  512, alignment  0/ 0:        126%
Length  512, alignment  3/10:        585%
Length  512, alignment  0/ 0:        126%
Length  512, alignment  0/ 0:        127%
Length  128, alignment  0/ 0:        129%
Length  128, alignment  0/ 0:        128%
Length  128, alignment  0/ 0:        129%
Length 1024, alignment  0/ 0:        125%
Length 1024, alignment  2/12:        611%
Length 1024, alignment  0/ 0:        126%
Length 1024, alignment  0/ 0:        126%
Length  256, alignment  0/ 0:        128%
Length  256, alignment  0/ 0:        127%
Length  256, alignment  0/ 0:        128%
Length 2048, alignment  0/ 0:        125%
Length 2048, alignment  1/14:        625%
Length 2048, alignment  0/ 0:        125%
Length 2048, alignment  0/ 0:        125%
Length  512, alignment  0/ 0:        126%
Length  512, alignment  0/ 0:        127%
Length  512, alignment  0/ 0:        127%
Length 4096, alignment  0/ 0:        125%
Length 4096, alignment  0/16:        125%
Length 4096, alignment  0/ 0:        125%
Length 4096, alignment  0/ 0:        125%
Length 1024, alignment  0/ 0:        126%
Length 1024, alignment  0/ 0:        126%
Length 1024, alignment  0/ 0:        126%
Length 8192, alignment  0/ 0:        125%
Length 8192, alignment 63/18:        636%
Length 8192, alignment  0/ 0:        125%
Length 8192, alignment  0/ 0:        125%
Length   16, alignment  1/ 2:        317%
Length   16, alignment  1/ 2:        317%
Length   16, alignment  1/ 2:        317%
Length   32, alignment  2/ 4:        395%
Length   32, alignment  2/ 4:        395%
Length   32, alignment  2/ 4:        398%
Length   64, alignment  3/ 6:        475%
Length   64, alignment  3/ 6:        475%
Length   64, alignment  3/ 6:        477%
Length  128, alignment  4/ 8:        479%
Length  128, alignment  4/ 8:        479%
Length  128, alignment  4/ 8:        479%
Length  256, alignment  5/10:        543%
Length  256, alignment  5/10:        539%
Length  256, alignment  5/10:        543%
Length  512, alignment  6/12:        585%
Length  512, alignment  6/12:        585%
Length  512, alignment  6/12:        585%
Length 1024, alignment  7/14:        611%
Length 1024, alignment  7/14:        611%
Length 1024, alignment  7/14:        611%

Signed-off-by: Francisco Franco <franciscofranco.1990@gmail.com>
Signed-off-by: kdrag0n <dragon@khronodragon.com>
Signed-off-by: utsavbalar1231 <utsavbalar1231@gmail.com>
Signed-off-by: Fiqri Ardyansyah <fiqri15072019@gmail.com>
Signed-off-by: Edwiin Kusuma Jaya <kutemeikito0905@gmail.com>
2025-12-14 17:58:46 +00:00
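The structure of the algorithm reads naturally in C before it reads as assembly. An illustrative (and much slower) C analogue of the approach, not the submitted .S file:

```c
#include <stdint.h>
#include <string.h>

/* Inputs under 8 bytes go through a byte loop; longer inputs are compared
 * 8 bytes at a time with unaligned-safe loads, and the tail is handled by
 * re-reading the last (possibly overlapping) 8 bytes. */
static int memcmp_sketch(const void *a, const void *b, size_t n)
{
    const unsigned char *p = a, *q = b;
    size_t i;

    if (n < 8) {
        for (i = 0; i < n; i++)
            if (p[i] != q[i])
                return p[i] - q[i];
        return 0;
    }
    for (i = 0; i + 8 <= n; i += 8) {
        uint64_t x, y;
        memcpy(&x, p + i, 8);  /* unaligned-safe 8-byte load */
        memcpy(&y, q + i, 8);
        if (x != y)
            return memcmp(p + i, q + i, 8);  /* resolve the differing byte */
    }
    return memcmp(p + n - 8, q + n - 8, 8);  /* last 8 bytes, may overlap */
}
```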
Yuanyuan Zhong
0d53b9a549 arm64: strcmp: align to 64B cache line
Align strcmp to 64B. This ensures the performance-critical
loop is within one 64B cache line.

Change-Id: I9240fbb4407637b2290a44e02ad59098a377b356
Signed-off-by: Yuanyuan Zhong <zyy@motorola.com>
Reviewed-on: https://gerrit.mot.com/902536
SME-Granted: SME Approvals Granted
SLTApproved: Slta Waiver <sltawvr@motorola.com>
Tested-by: Jira Key <jirakey@motorola.com>
Reviewed-by: Yi-Wei Zhao <gbjc64@motorola.com>
Reviewed-by: Igor Kovalenko <igork@motorola.com>
Submit-Approved: Jira Key <jirakey@motorola.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Nauval Rizky <enuma.alrizky@gmail.com>
Signed-off-by: Fiqri Ardyansyah <fiqri15072019@gmail.com>
Signed-off-by: Edwiin Kusuma Jaya <kutemeikito0905@gmail.com>
2025-12-14 17:58:38 +00:00
Rafael Ortolan
244a84a926 driver/usb: Fix buffer overflow issue detected by KASAN
Fix stack-out-of-bounds issue detected by KASAN, which could result
in random kernel memory corruptions:

[685:tcpc_event_type]==================================================================
[685:tcpc_event_type]BUG: KASAN: stack-out-of-bounds in mt6360_transmit+0xec/0x260
[685:tcpc_event_type]Write of size 28 at addr ffffffe6ca09f963 by task tcpc_event_type/685
[685:tcpc_event_type]
[685:tcpc_event_type]CPU: 1 PID: 685 Comm: tcpc_event_type Tainted: G S      W  O    4.14.186+ #1
[685:tcpc_event_type]Hardware name: MT6853V/NZA (DT)
[685:tcpc_event_type]Call trace:
[685:tcpc_event_type] dump_backtrace+0x0/0x374
[685:tcpc_event_type] show_stack+0x20/0x2c
[685:tcpc_event_type] dump_stack+0x148/0x1b8
[685:tcpc_event_type] print_address_description+0x70/0x248
[685:tcpc_event_type] __kasan_report+0x150/0x180
[685:tcpc_event_type] kasan_report+0x10/0x18
[685:tcpc_event_type] check_memory_region+0x18c/0x198
[685:tcpc_event_type] memcpy+0x48/0x68
[685:tcpc_event_type] mt6360_transmit+0xec/0x260
[685:tcpc_event_type] tcpci_transmit+0xb8/0xe4
[685:tcpc_event_type] pd_send_message+0x238/0x388
[685:tcpc_event_type] pd_reply_svdm_request+0x1f0/0x2f8
[685:tcpc_event_type] pd_dpm_ufp_request_id_info+0xcc/0x188
[685:tcpc_event_type] pe_ufp_vdm_get_identity_entry+0x1c/0x28
[685:tcpc_event_type] pd_handle_event+0x3cc/0x74c
[685:tcpc_event_type] pd_policy_engine_run+0x18c/0x748
[685:tcpc_event_type] tcpc_event_thread_fn+0x1b4/0x32c
[685:tcpc_event_type] kthread+0x2a8/0x2c0
[685:tcpc_event_type] ret_from_fork+0x10/0x18
[685:tcpc_event_type]==================================================================

Change-Id: I25ee1b2457592d470619f3bea1fb3fc1a2bc678c
Reviewed-on: https://gerrit.mot.com/2320832
SME-Granted: SME Approvals Granted
SLTApproved: Slta Waiver
Reviewed-by: Murilo Alves <alvesm@motorola.com>
Reviewed-by: Gilberto Gambugge Neto <gambugge@motorola.com>
Tested-by: Jira Key
Submit-Approved: Jira Key
Signed-off-by: Murilo Alves <alvesm@motorola.com>
Reviewed-on: https://gerrit.mot.com/2334041
Reviewed-by: Rafael Ortolan <rafones@motorola.com>
Reviewed-by: Zhihong Kang <kangzh@motorola.com>
2025-12-13 16:31:14 +00:00
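The defensive pattern behind a fix like this is to never let a message-supplied length drive a memcpy past a fixed destination buffer. A generic sketch (the names and the 28-byte size are hypothetical, taken only from the KASAN report's write size):

```c
#include <string.h>

enum { TX_BUF_SIZE = 28 };  /* hypothetical on-stack buffer size */

/* Clamp the copy to the destination size; return how much was copied. */
static size_t fill_tx_buf(unsigned char dst[TX_BUF_SIZE],
                          const unsigned char *src, size_t len)
{
    if (len > TX_BUF_SIZE)
        len = TX_BUF_SIZE;  /* oversized PD message cannot overrun the stack */
    memcpy(dst, src, len);
    return len;
}

/* Exercise the clamp with an in-range and an oversized length. */
static size_t demo_copy(size_t len)
{
    unsigned char dst[TX_BUF_SIZE];
    unsigned char src[64] = {0};
    return fill_tx_buf(dst, src, len);
}
```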
Damien Le Moal
b7f275383a block: Expose queue nr_zones in sysfs
Expose through sysfs the nr_zones field of struct request_queue.
Exposing this value helps in debugging disk issues as well as
facilitating script-based use of the disk (e.g. blktests).

For zoned block devices, the nr_zones field indicates the total number
of zones of the device calculated using the known disk capacity and
zone size. This number of zones is always 0 for regular block devices.

Since nr_zones is defined conditionally with CONFIG_BLK_DEV_ZONED,
introduce the blk_queue_nr_zones() function to return the correct value
for any device, regardless of whether CONFIG_BLK_DEV_ZONED is set.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:09 +00:00
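The conditional helper can be sketched like this (a heavily simplified stand-in struct; the real one lives in the block layer headers):

```c
/* With zoned support compiled out, the field does not exist, so the
 * helper hides the #ifdef from callers and reports 0 zones. */
#ifdef CONFIG_BLK_DEV_ZONED
struct request_queue { unsigned int nr_zones; };
static unsigned int blk_queue_nr_zones(struct request_queue *q)
{
    return q->nr_zones;
}
#else
struct request_queue { unsigned int unused; };
static unsigned int blk_queue_nr_zones(struct request_queue *q)
{
    (void)q;
    return 0;  /* regular block devices always report 0 zones */
}
#endif
```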
Damien Le Moal
2f17d2875c block: Improve zone reset execution
There is no need to synchronously execute all REQ_OP_ZONE_RESET BIOs
necessary to reset a range of zones. Similarly to what is done for
discard BIOs in blk-lib.c, all zone reset BIOs can be chained and
executed asynchronously and a synchronous call done only for the last
BIO of the chain.

Modify blkdev_reset_zones() to operate similarly to
blkdev_issue_discard() using the next_bio() helper for chaining BIOs. To
avoid code duplication of that function in blk_zoned.c, rename
next_bio() into blk_next_bio() and declare it as a block internal
function in blk.h.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:09 +00:00
Damien Le Moal
f243b64a11 block: Introduce BLKGETNRZONES ioctl
Get a zoned block device's total number of zones. The device can be a
partition of the whole device. The number of zones is always 0 for
regular block devices.

Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Damien Le Moal
6645652532 block: Introduce BLKGETZONESZ ioctl
Get a zoned block device's zone size in number of 512 B sectors.
The zone size is always 0 for regular block devices.

Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Damien Le Moal
6a2c25e507 block: Limit allocation of zone descriptors for report zones
There is no point in allocating more zone descriptors than the number of
zones a block device has for doing a zone report. Avoid doing that in
blkdev_report_zones_ioctl() by limiting the number of zone descriptors
allocated internally to process the user request.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Damien Le Moal
2fe6c878c8 block: Introduce blkdev_nr_zones() helper
Introduce the blkdev_nr_zones() helper function to get the total
number of zones of a zoned block device. This number is always 0 for a
regular block device (q->limits.zoned == BLK_ZONED_NONE case).

Replace hard-coded number of zones calculation in dmz_get_zoned_device()
with a call to this helper.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Omar Sandoval
5da7123f99 kyber: fix integer overflow of latency targets on 32-bit
NSEC_PER_SEC has type long, so 5 * NSEC_PER_SEC is calculated as a long.
However, 5 seconds is 5,000,000,000 nanoseconds, which overflows a
32-bit long. Make sure all of the targets are calculated as 64-bit
values.

Fixes: 6e25cb01ea20 ("kyber: implement improved heuristics")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
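The bug is easy to reproduce in isolation: on an LP32 target, `5 * NSEC_PER_SEC` is a 32-bit multiply that wraps before the result is widened. Forcing an operand to 64 bits makes the multiplication itself 64-bit (sketch with a local NSEC_PER_SEC):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000L  /* has type long, as in the kernel */

/* 5 * NSEC_PER_SEC = 5,000,000,000, which does not fit in a 32-bit long;
 * casting first forces the multiply to be done in 64 bits everywhere. */
static int64_t read_target_ns(void)
{
    return (int64_t)5 * NSEC_PER_SEC;
}
```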
Omar Sandoval
512db70a52 kyber: add tracepoints
When debugging Kyber, it's really useful to know what latencies we've
been having, how the domain depths have been adjusted, and if we've
actually been throttling. Add three tracepoints, kyber_latency,
kyber_adjust, and kyber_throttled, to record that.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
Omar Sandoval
4b06e8872a kyber: implement improved heuristics
Kyber's current heuristics have a few flaws:

- It's based on the mean latency, but p99 latency tends to be more
  meaningful to anyone who cares about latency. The mean can also be
  skewed by rare outliers that the scheduler can't do anything about.
- The statistics calculations are purely time-based with a short window.
  This works for steady, high load, but is more sensitive to outliers
  with bursty workloads.
- It only considers the latency once an I/O has been submitted to the
  device, but the user cares about the time spent in the kernel, as
  well.

These are shortcomings of the generic blk-stat code which doesn't quite
fit the ideal use case for Kyber. So, this replaces the statistics with
a histogram used to calculate percentiles of total latency and I/O
latency, which we then use to adjust depths in a slightly more
intelligent manner:

- Sync and async writes are now the same domain.
- Discards are a separate domain.
- Domain queue depths are scaled by the ratio of the p99 total latency
  to the target latency (e.g., if the p99 latency is double the target
  latency, we will double the queue depth; if the p99 latency is half of
  the target latency, we can halve the queue depth).
- We use the I/O latency to determine whether we should scale queue
  depths down: we will only scale down if any domain's I/O latency
  exceeds the target latency, which is an indicator of congestion in the
  device.

These new heuristics are just as scalable as the heuristics they
replace.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:08 +00:00
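The depth adjustment described above is simple integer arithmetic; a sketch with an illustrative clamp (the bounds are not Kyber's actual limits):

```c
#include <stdint.h>

/* Scale a domain's queue depth by the ratio of p99 latency to target
 * latency: double the p99 doubles the depth, half the p99 halves it. */
static unsigned scale_depth(unsigned depth, uint64_t p99_ns,
                            uint64_t target_ns)
{
    uint64_t scaled = (uint64_t)depth * p99_ns / target_ns;

    if (scaled < 1)
        scaled = 1;    /* never starve the domain entirely */
    if (scaled > 256)
        scaled = 256;  /* illustrative upper bound */
    return (unsigned)scaled;
}
```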
Omar Sandoval
95c81bbedc kyber: don't make domain token sbitmap larger than necessary
The domain token sbitmaps are currently initialized to the device queue
depth or 256, whichever is larger, and immediately resized to the
maximum depth for that domain (256, 128, or 64 for read, write, and
other, respectively). The sbitmap is never resized larger than that, so
it's unnecessary to allocate a bitmap larger than the maximum depth.
Let's just allocate it to the maximum depth to begin with. This will use
marginally less memory, and more importantly, give us a more appropriate
number of bits per sbitmap word.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:07 +00:00
Omar Sandoval
0a9c2ef26a block: move call of scheduler's ->completed_request() hook
Commit 4bc6339a58 ("block: move blk_stat_add() to
__blk_mq_end_request()") consolidated some calls using ktime_get() so
we'd only need to call it once. Kyber's ->completed_request() hook also
calls ktime_get(), so let's move it to the same place, too.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2025-12-08 00:52:07 +00:00
claxten10
2d3c7708e0 arch: arm64: configs: Enable Kyber I/O sched
Signed-off-by: claxten10 <claxten10@gmail.com>
2025-12-08 00:52:07 +00:00
Roman Gushchin
e8f74fb113 mm: memcg/slab: generalize postponed non-root kmem_cache deactivation
Currently SLUB uses a work scheduled after an RCU grace period to
deactivate a non-root kmem_cache.  This mechanism can be reused for
kmem_caches release, but requires generalization for SLAB case.

Introduce kmemcg_cache_deactivate() function, which calls
allocator-specific __kmem_cache_deactivate() and schedules execution of
__kmem_cache_deactivate_after_rcu() with all necessary locks in a worker
context after an rcu grace period.

Here is the new calling scheme:
  kmemcg_cache_deactivate()
    __kmemcg_cache_deactivate()                  SLAB/SLUB-specific
    kmemcg_rcufn()                               rcu
      kmemcg_workfn()                            work
        __kmemcg_cache_deactivate_after_rcu()    SLAB/SLUB-specific

instead of:
  __kmemcg_cache_deactivate()                    SLAB/SLUB-specific
    slab_deactivate_memcg_cache_rcu_sched()      SLUB-only
      kmemcg_rcufn()                             rcu
        kmemcg_workfn()                          work
          kmemcg_cache_deact_after_rcu()         SLUB-only

For consistency, all allocator-specific functions start with "__".

Link: http://lkml.kernel.org/r/20190611231813.3148843-4-guro@fb.com
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Waiman Long <longman@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Andrei Vagin <avagin@gmail.com>
Cc: Qian Cai <cai@lca.pw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-12-08 00:52:07 +00:00
Sultan Alsawaf
5ac450d018 arch: arm64: configs: Disable SLUB per-CPU partial caches
CONFIG_SLUB_CPU_PARTIAL is not set

This causes load spikes when the per-CPU partial caches are filled and
need to be drained, which is bad for maintaining low latency.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2025-12-08 00:52:07 +00:00
Manaf Meethalavalappu Pallikunhi
04450f04ff arch: arm64: configs: Enable powercap framework
CONFIG_POWERCAP=y

It enables the power capping sysfs interface for
different power zone devices.

Bug: 220884335
Change-Id: I11bc3efe06d2a02dcc602d223d3e6757088ca771
Signed-off-by: Manaf Meethalavalappu Pallikunhi <quic_manafm@quicinc.com>
2025-12-08 00:52:07 +00:00
Ocean Chen
6fe001fbc3 arch: arm64: configs: Enable zram-writeback
CONFIG_ZRAM_WRITEBACK=y

Bug: 142299185
Change-Id: Id9a928d436a3069c32e7569bfddc6da79beee3c2
Signed-off-by: Ocean Chen <oceanchen@google.com>
2025-12-08 00:52:07 +00:00
Paul Zhang
1537524516 arch: arm64: configs: Disable CONFIG_CFG80211_CRDA_SUPPORT
CONFIG_CFG80211_CRDA_SUPPORT is not set

Since CRDA is not supported, disable CONFIG_CFG80211_CRDA_SUPPORT
by default.

Change-Id: I01bde48aea21612b9d5c79b11931999e02d610b4
CRs-Fixed: 2946898
Signed-off-by: Paul Zhang <paulz@codeaurora.org>
2025-12-08 00:52:06 +00:00
Nathan Chancellor
6c5709097a kernel/profile: Use cpumask_available to check for NULL cpumask
When building with clang + -Wtautological-pointer-compare, these
instances pop up:

  kernel/profile.c:339:6: warning: comparison of array 'prof_cpu_mask' not equal to a null pointer is always true [-Wtautological-pointer-compare]
          if (prof_cpu_mask != NULL)
              ^~~~~~~~~~~~~    ~~~~
  kernel/profile.c:376:6: warning: comparison of array 'prof_cpu_mask' not equal to a null pointer is always true [-Wtautological-pointer-compare]
          if (prof_cpu_mask != NULL)
              ^~~~~~~~~~~~~    ~~~~
  kernel/profile.c:406:26: warning: comparison of array 'prof_cpu_mask' not equal to a null pointer is always true [-Wtautological-pointer-compare]
          if (!user_mode(regs) && prof_cpu_mask != NULL &&
                                ^~~~~~~~~~~~~    ~~~~
  3 warnings generated.

This can be addressed with the cpumask_available helper, introduced in
commit f7e30f0 ("cpumask: Add helper cpumask_available()") to fix
warnings like this while keeping the code the same.

Link: ClangBuiltLinux#747
Link: http://lkml.kernel.org/r/20191022191957.9554-1-natechancellor@gmail.com
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-12-08 00:52:06 +00:00
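The warning exists because with CONFIG_CPUMASK_OFFSTACK=n the mask is a fixed array, so comparing it to NULL is tautologically true. A helper like cpumask_available() hides the representation; a sketch with stand-in types (MASK_OFFSTACK and the names are illustrative):

```c
#include <stddef.h>

/* Pointer representation may be NULL; array representation never is. */
#ifdef MASK_OFFSTACK
typedef unsigned long *prof_mask_t;               /* pointer: may be NULL */
static int mask_available(prof_mask_t m) { return m != NULL; }
#else
typedef unsigned long prof_mask_t[1];             /* array: always present */
static int mask_available(const unsigned long *m) { (void)m; return 1; }
#endif

static prof_mask_t prof_cpu_mask;  /* mirrors the array in kernel/profile.c */

static int profile_tick_allowed(void)
{
    return mask_available(prof_cpu_mask);  /* replaces the `!= NULL` compare */
}
```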
Davidlohr Bueso
47bb162e2c kernel/sched/core: Add branch prediction hint to wake_q_add() cmpxchg
The cmpxchg() will fail when the task is already in the process
of waking up, and as such is an extremely rare occurrence.
Micro-optimize the call and put an unlikely() around it.

Unsurprisingly, when using CONFIG_PROFILE_ANNOTATED_BRANCHES
under a number of workloads the misprediction rate was a mere 1-2%.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yongji Xie <elohimes@gmail.com>
Cc: andrea.parri@amarulasolutions.com
Cc: lilin24@baidu.com
Cc: liuqi16@baidu.com
Cc: nixun@baidu.com
Cc: xieyongji@baidu.com
Cc: yuanlinsi01@baidu.com
Cc: zhangyu31@baidu.com
Link: https://lkml.kernel.org/r/20181203053130.gwkw6kg72azt2npb@linux-r8p5
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2025-12-08 00:52:06 +00:00
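The hint pattern can be shown in a standalone sketch of the fast-path shape (the names and the link structure are simplifications of wake_q_add(), not the kernel code):

```c
#include <stdatomic.h>
#include <stddef.h>

/* GCC/Clang lower this to __builtin_expect; other compilers get a no-op. */
#if defined(__GNUC__)
#define unlikely(x) __builtin_expect(!!(x), 0)
#else
#define unlikely(x) (x)
#endif

static _Atomic(void *) wake_q_next;  /* stand-in for the task's wake_q link */

/* The cmpxchg fails only when the task is already waking up, which is
 * rare, so the failure branch gets the unlikely() hint. */
static int wake_q_try_add(void *node)
{
    void *expected = NULL;

    if (unlikely(!atomic_compare_exchange_strong(&wake_q_next, &expected,
                                                 node)))
        return 0;  /* already queued: rare path */
    return 1;
}

/* Two back-to-back adds: the first wins, the second hits the rare path. */
static int wake_q_demo(void)
{
    static int task;
    return wake_q_try_add(&task) * 10 + wake_q_try_add(&task);
}
```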
Li zeming
0a3492d5ec kernel/time/alarmtimer: Remove unnecessary initialization of variable 'ret'
ret is assigned before it is checked, so the variable does not need to
be initialized.

Signed-off-by: Li zeming <zeming@nfschina.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230609182856.4660-1-zeming@nfschina.com
2025-12-08 00:52:06 +00:00
Li zeming
3fd6a03917 kernel/time/alarmtimer: Remove unnecessary (void *) cast
Pointers of type void * do not require a type cast when they are assigned
to a pointer of a concrete type.

Signed-off-by: Li zeming <zeming@nfschina.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230609182059.4509-1-zeming@nfschina.com
2025-12-08 00:52:06 +00:00
Tetsuo Handa
884c45b53e kernel: Initialize cpumask before parsing
KMSAN complains that new_value at cpumask_parse_user() from
write_irq_affinity() from irq_affinity_proc_write() is uninitialized.

  [  148.133411][ T5509] =====================================================
  [  148.135383][ T5509] BUG: KMSAN: uninit-value in find_next_bit+0x325/0x340
  [  148.137819][ T5509]
  [  148.138448][ T5509] Local variable ----new_value.i@irq_affinity_proc_write created at:
  [  148.140768][ T5509]  irq_affinity_proc_write+0xc3/0x3d0
  [  148.142298][ T5509]  irq_affinity_proc_write+0xc3/0x3d0
  [  148.143823][ T5509] =====================================================

Since bitmap_parse() from cpumask_parse_user() calls find_next_bit(),
any alloc_cpumask_var() + cpumask_parse_user() sequence risks
find_next_bit() accessing an uninitialized cpumask variable. Fix this
problem by replacing alloc_cpumask_var() with zalloc_cpumask_var().

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Link: https://lore.kernel.org/r/20210401055823.3929-1-penguin-kernel@I-love.SAKURA.ne.jp
2025-12-08 00:52:06 +00:00
Philippe Liard
9bc2ed82df fs/squashfs: Migrate from ll_rw_block usage to BIO
The ll_rw_block() function has been deprecated in favor of BIO, which
appears to come with large performance improvements.

This patch decreases boot time by close to 40% when using squashfs for
the root file-system.  This is observed at least in the context of
starting an Android VM on Chrome OS using crosvm.  The patch was tested
on 4.19 as well as master.

This patch is largely based on Adrien Schildknecht's patch that was
originally sent as https://lkml.org/lkml/2017/9/22/814 though with some
significant changes and simplifications while also taking Phillip
Lougher's feedback into account, around preserving support for
FILE_CACHE in particular.

[akpm@linux-foundation.org: fix build error reported by Randy]
  Link: http://lkml.kernel.org/r/319997c2-5fc8-f889-2ea3-d913308a7c1f@infradead.org
Signed-off-by: Philippe Liard <pliard@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Adrien Schildknecht <adrien+dev@schischi.me>
Cc: Phillip Lougher <phillip@squashfs.org.uk>
Cc: Guenter Roeck <groeck@chromium.org>
Cc: Daniel Rosenberg <drosen@google.com>
Link: https://chromium.googlesource.com/chromiumos/platform/crosvm
Link: http://lkml.kernel.org/r/20191106074238.186023-1-pliard@google.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-12-08 00:52:05 +00:00
Alexander Winkowski
bd9e610f6d mm/page_alloc: Disable pcp lists checks on !DEBUG_VM
Reference: https://lore.kernel.org/all/20230201162549.68384-1-halbuer@sra.uni-hannover.de/T/#m2d0dccbb7653a8761a657ee046766dcd56e35df9

Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
2025-12-08 00:52:05 +00:00
Minchan Kim
01a6275c21 arch: arm64: configs: Disable CONFIG_MEMCG and MEMCG_SWAP
CONFIG_MEMCG is not set

Pixel doesn't use memcg, but it costs 15% performance in a minor-fault
benchmark, so disable it until we see a strong reason.

Bug: 169443770
Signed-off-by: Minchan Kim <minchan@google.com>
Change-Id: Ifd9ddcd54559c590260d52f60a2e5e4b79c5480f
2025-12-08 00:52:05 +00:00
Frederic Weisbecker
e00a2dfe71 rcu: Assume rcu_init() is called before smp
The rcu_init() function is called way before SMP is initialized and
therefore only the boot CPU should be online at this stage.

Simplify the boot per-cpu initialization accordingly.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com>
Cc: Uladzislau Rezki <uladzislau.rezki@sony.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2025-12-08 00:52:05 +00:00
Mashopy
a0425d99f8 arch: arm64: boot: dts: Remove initcall_debug=1 for MT6781
This is a production kernel, not a debug one.
2025-12-08 00:52:05 +00:00
Cyrill Gorcunov
c9e54c78d7 rcu: rcu_qs -- Use raise_softirq_irqoff to not save irqs twice
rcu_qs() already disables IRQs itself, so there is no need to do the same
in raise_softirq(); instead we can save some cycles by using
raise_softirq_irqoff() directly.

CC: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
2025-12-08 00:52:05 +00:00
Paul E. McKenney
04904ffe37 rcu/tiny: Convert to SPDX license identifier
Replace the license boilerplate with an SPDX license identifier.
While in the area, update an email address.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
2025-12-08 00:52:05 +00:00
Paul E. McKenney
3dd3836e93 rcu: Rename rcu_check_callbacks() to rcu_sched_clock_irq()
The name rcu_check_callbacks() arguably made sense back in the early
2000s when RCU was quite a bit simpler than it is today, but it has
become quite misleading, especially with the advent of dyntick-idle
and NO_HZ_FULL.  The rcu_check_callbacks() function is RCU's hook into
the scheduling-clock interrupt, and is now but one of many ways that
callbacks get promoted to invocable state.

This commit therefore changes the name to rcu_sched_clock_irq(),
which is the same number of characters and clearly indicates this
function's relation to the rest of the Linux kernel.  In addition, for
the sake of consistency, rcu_flavor_check_callbacks() is also renamed
to rcu_flavor_sched_clock_irq().

While in the area, the header comments for both functions are reworked.

Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
20743a0645 srcu: Make call_srcu() available during very early boot
Event tracing is moving to SRCU in order to take advantage of the fact
that SRCU may be safely used from idle and even offline CPUs.  However,
event tracing can invoke call_srcu() very early in the boot process,
even before workqueue_init_early() is invoked (let alone rcu_init()).
Therefore, call_srcu()'s attempts to queue work fail miserably.

This commit therefore detects this situation, and refrains from attempting
to queue work before rcu_init() time, but does everything else that it
would have done, and in addition, adds the srcu_struct to a global list.
The rcu_init() function now invokes a new srcu_init() function, which
is empty if CONFIG_SRCU=n.  Otherwise, srcu_init() queues work for
each srcu_struct on the list.  This all happens early enough in boot
that there is but a single CPU with interrupts disabled, which allows
synchronization to be dispensed with.

Of course, the queued work won't actually be invoked until after
workqueue_init() is invoked, which happens shortly after the scheduler
is up and running.  This means that although call_srcu() may be invoked
any time after per-CPU variables have been set up, there is still a very
narrow window when synchronize_srcu() won't work, and this window
extends from the time that the scheduler starts until the time that
workqueue_init() returns.  This can be fixed in a manner similar to
the fix for synchronize_rcu_expedited() and friends, but until someone
actually needs to use synchronize_srcu() during this window, this fix
is added churn for no benefit.

Finally, note that Tree SRCU's new srcu_init() function invokes
queue_work() rather than the queue_delayed_work() function that is
invoked post-boot.  The reason is that queue_delayed_work() will (as you
would expect) post a timer, and timers have not yet been initialized.
So use of queue_work() avoids the complaints about use of uninitialized
spinlocks that would otherwise result.  Besides, some delay is already
provided by the aforementioned fact that the queued work won't actually
be invoked until after the scheduler is up and running.

Requested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
071728046f rcu: Motivate Tiny RCU forward progress
If a long-running CPU-bound in-kernel task invokes call_rcu(), the
callback won't be invoked until the next context switch.  If there are
no other runnable tasks (which is not an uncommon situation on deep
embedded systems), the callback might never be invoked.

This commit therefore causes rcu_check_callbacks() to ask the scheduler
for a context switch if there are callbacks posted that are still waiting
for a grace period.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
1e9c40c21d rcu: Clean up flavor-related definitions and comments in tiny.c
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
8cb39e4b1e rcu: Express Tiny RCU updates in terms of RCU rather than RCU-sched
This commit renames Tiny RCU functions so that the lowest level of
functionality is RCU (e.g., synchronize_rcu()) rather than RCU-sched
(e.g., synchronize_sched()).  This provides greater naming compatibility
with Tree RCU, which will in turn permit more LoC removal once
the RCU-sched and RCU-bh update-side API is removed.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
[ paulmck: Fix Tiny call_rcu()'s EXPORT_SYMBOL() in response to a bug
  report from kbuild test robot. ]
2025-12-08 00:52:04 +00:00
Paul E. McKenney
c10274c5a7 rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds
Now that RCU-preempt knows about preemption disabling, its implementation
of synchronize_rcu() works for synchronize_sched(), and likewise for the
other RCU-sched update-side API members.  This commit therefore confines
the RCU-sched update-side code to CONFIG_PREEMPT=n builds, and defines
RCU-sched's update-side API members in terms of those of RCU-preempt.

This means that any given build of the Linux kernel has only one
update-side flavor of RCU, namely RCU-preempt for CONFIG_PREEMPT=y builds
and RCU-sched for CONFIG_PREEMPT=n builds.  This in turn means that kernels
built with CONFIG_RCU_NOCB_CPU=y have only one rcuo kthread per CPU.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Andi Kleen <ak@linux.intel.com>
2025-12-08 00:52:04 +00:00
Paul E. McKenney
d3b896077a rcu: Define RCU-bh update API in terms of RCU
Now that the main RCU API knows about softirq disabling and softirq's
quiescent states, the RCU-bh update code can be dispensed with.
This commit therefore removes the RCU-bh update-side implementation and
defines RCU-bh's update-side API in terms of that of either RCU-preempt or
RCU-sched, depending on the setting of the CONFIG_PREEMPT Kconfig option.

In kernels built with CONFIG_RCU_NOCB_CPU=y this has the knock-on effect
of reducing by one the number of rcuo kthreads per CPU.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2025-12-08 00:52:04 +00:00
Aeron-Aeron
1b014dbe07 perf: make MediaTek perf observer suspend-aware and reduce wakeups
* Converted the observer polling to be suspend-aware so it stops pinging every
  32 ms. Added a pob_timer_active flag, suspend/resume notifier, and bumped the
  interval to 64 ms. The hrtimer callback now quietly backs off when disabled.

* Maybe now the CPU can finally enjoy its beauty sleep.

Signed-off-by: Aeron-Aeron <aeronrules2@gmail.com>
2025-12-08 00:52:03 +00:00
Woomymy
54404ec743 kernel: irq_work: Remove mediatek schedule monitor support
Change-Id: I4cf9879d9e8eb605f37e50fcde089b32ef6e7c9d
2025-12-08 00:52:03 +00:00
Mashopy
3db111c587 gen4m: Let scheduler handle cpu boosting
Change-Id: I06ea48ce6663563c8240a6196bc85a6b6fc0b43c
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-08 00:52:03 +00:00
Cyber Knight
4f7b123d6d connectivity/wlan-core-gen4m: Bump 2.4GHz hotspot bandwidth to 40mhz
- This should improve the reliability of 2.4GHz hotspot connections.

Change-Id: Iea450301518d22701c35040a2581cb37d2d39ccf
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-08 00:52:03 +00:00
kehaizhou
cd86d8762a [ALPS09502745] mgmt: Fix kernel panic due to hardware watchdog
[Description]
Modified debug logging to prevent a kernel panic caused by the hardware
watchdog when handling an IRQ from the wifi module.

[Test]
UT

MTK-Commit-Id: cc6d75fbfacaed326aeaa9fdea03d95fe558a6f3

Signed-off-by: kehaizhou <haizhou.ke@mediatek.com>
CR-Id: ALPS09502745
Feature: Others
Change-Id: I67a0b50a587d1e93c7136a685dcd1b8f0e1f7e89
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/9912268
Commit-Check: srv_check_service <srv_check_service@mediatek.com>
AutoUT-Review-Label: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Reviewed-by: shuaishuai.kong <shuaishuai.kong@mediatek.com>
(cherry picked from commit ee220b7700144a6d16e0274fede3029aa2543ac3)
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/9921411
Build: srv_preflight_a001 <srv_preflight_a001@mediatek.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-08 00:52:03 +00:00
Wyatt Sun
a88c6c1111 [ALPS09702582] Use flexible array member for RX statistics
[Description]
Use flexible array to avoid false alarm of OOB check

[Test]
Build pass to let auto test try it again.

MTK-Commit-Id: 84dcece8624b4d84d597257f5412acb57306998d

CR-Id: ALPS09702582
Feature: Wi-Fi Driver CONNAC
Change-Id: I0de4f74289513ab2177127336ec190b13223ab82
Signed-off-by: Wyatt Sun <wyatt.sun@mediatek.com>
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/9152977
Build: srv_preflight_b001 <srv_preflight_b001@mediatek.com>
Build: srv_preflight_a001 <srv_preflight_a001@mediatek.com>
Build: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Coverity-Review-Label: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Test: srv_pf_nep_sanity
AutoUT-Review-Label: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Test: srv_preflight_a001 <srv_preflight_a001@mediatek.com>
Reviewed-by: jim.chuang <jim.chuang@mediatek.com>
ODB-Check: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Commit-Check: srv_check_service <srv_check_service@mediatek.com>
(cherry picked from commit 7663be91b02c2519f15824e05660e2eda9f90afb)
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/9389610
Reviewed-by: holiday.hao <holiday.hao@mediatek.com>
Tested-by: jim.chuang <jim.chuang@mediatek.com>
(cherry picked from commit 87f1b9f39912ab3be49df89f399a29eb8bf59719)
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/10128328
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-08 00:52:03 +00:00
Xiang Wu
e72240dce9 [ALPS06528517] WAPI: Support SMS4
Claim support for WPI-SMS4 if WAPI is enabled.

MTK-Commit-Id: 2e992a329b0ebbc8a9293f69b69fc3e21ed2377e

CR-Id: ALPS06528517
Change-Id: Id3b383ae0d9718fb28f247860ac33bd9e080cf30
Signed-off-by: Xiang Wu <xiang.wu@mediatek.com>
Feature: WAPI
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/8429859
ODB-Check: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Commit-Check: srv_check_service <srv_check_service@mediatek.com>
Build: srv_preflight_b001 <srv_preflight_b001@mediatek.com>
Build: srv_pf_nep_win <srv_pf_nep_win@mediatek.com>
Reviewed-by: sticky.chen <sticky.chen@mediatek.com>
Test: srv_preflight_a001 <srv_preflight_a001@mediatek.com>
Build: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Reviewed-by: boforn.lin <boforn.lin@mediatek.com>
Test: srv_mspautosanity
Coverity-Review-Label: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Build: srv_preflight_a001 <srv_preflight_a001@mediatek.com>
AutoUT-Review-Label: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Test: srv_pf_nep_sanity
CodeQL-Review-Label: srv_pf_nep_win <srv_pf_nep_win@mediatek.com>
Reviewed-by: han.hu <han.hu@mediatek.com>
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-08 00:52:03 +00:00
Jake Chung
9225800863 WFSYS: wlanRemove after wlanOnAtReset fail
[Description]
wlanRemove after wlanOnAtReset fail

[Test]
build pass, L0/L0.5 UT pass

Change-Id: Iae88ff10a70d150b5396782be67ab231d5fab987
Mot-CRs-Fixed: (CR)
CR-Id: ALPS09855577
Feature: Wi-Fi Driver CONNAC
Signed-off-by: Jake Chung <jake.chung@mediatek.com>
Signed-off-by: Yue Sun <sunyue5@motorola.com>
Reviewed-on: https://gerrit.mot.com/3329653
SME-Granted: SME Approvals Granted
SLTApproved: Slta Waiver
Tested-by: Jira Key
Reviewed-by: Zhilu Yin <yinzl1@motorola.com>
Submit-Approved: Jira Key
Signed-off-by: Cyber Knight <cyberknight755@gmail.com>
2025-12-08 00:52:02 +00:00
claxten10
45ca229eae drivers: misc: Completely rework AW8622 haptics driver
Signed-off-by: claxten10 <claxten10@gmail.com>
2025-12-08 00:13:55 +00:00
Woomymy
dc81465869 arch: arm64: configs: Disable PD debugging info
Change-Id: I6e4aea368af5cf3c9e60c2a1440810837d4068e4
Signed-off-by: Woomymy <woomy@woomy.be>
2025-12-08 00:13:55 +00:00
Woomymy
764826f6a9 arch: arm64: configs: Use reduced debug info
Change-Id: I6a329a773c0a1bee451217cc5b6a03de0e8e2687
Signed-off-by: Woomymy <woomy@woomy.be>
2025-12-08 00:13:55 +00:00
claxten10
357e5e049a arch: arm64: configs: Disable AEE features
Signed-off-by: claxten10 <claxten10@gmail.com>
2025-12-08 00:13:54 +00:00
bengris32
c5f2753931 arch: arm64: Build connectivity modules inline
Change-Id: I08f90939ca3d5c4e0c6c65a60c31c2cda4f9915d
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:54 +00:00
Eric Biggers
2d24c9882a arch: arm64: configs: enable BLAKE2b support
Bug: 178411248
[adelva: patched around missing XCBC option on 4.19]
Change-Id: Iec497954d29adcf7193da9ca4b27d61eac7615d9
Signed-off-by: Eric Biggers <ebiggers@google.com>
2025-12-08 00:13:54 +00:00
Rachel Tseng
078cb27882 [ALPS08338404] Dont use PMKID if auth type is SAE
If the STA puts a PMKID in the Assoc Req when the auth type is SAE, some
APs will reject it with "invalid PMKID", even if the PMKID is correct.
Therefore, don't use the PMKID if the auth type is SAE.

MTK-Commit-Id: 5f44cb7b7a5067da4bf426c33500abcb7770d729

Change-Id: Ie3c3aea6801a9f1b8ef513a544f48bf3364b835a
CR-Id: ALPS08338404
Feature: Wi-Fi Driver CONNAC
Signed-off-by: junjiang.yu <ot_junjiang.yu@mediatek.com>
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/7730698
Test: srv_pf_nep_sanity
Coverity-Review-Label: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Test: srv_preflight_a001 <srv_preflight_a001@mediatek.com>
Reviewed-by: sticky.chen <sticky.chen@mediatek.com>
Reviewed-on: https://gerrit.mediatek.inc/c/neptune/wlan_driver/gen4m/+/7981197
Build: srv_preflight_a001 <srv_preflight_a001@mediatek.com>
AutoUT-Review-Label: srv_neptune_adm <srv_neptune_adm@mediatek.com>
Reviewed-by: rachel.tseng <rachel.tseng@mediatek.com>
Commit-Check: srv_check_service <srv_check_service@mediatek.com>
Reviewed-by: ben.lai <ben.lai@mediatek.com>
2025-12-08 00:13:54 +00:00
sunyue
7dd0e4b2dc wlan: Fix a NULL pointer issue in wlan host driver
In a rare case, a PMF wifi router sends an SA Query request to us before
the connection is established (aisUpdateBssInfoForJOIN), which causes a
kernel panic because prStaRecOfAP has not been set yet.

Solution:
Return directly from rsnSaQueryRequest() without updating the
MSDU_INFO_T or sending an SA Query response.

Change-Id: Ieb643f13dd1203e382881517af6cc7fb8e95c354
Reviewed-on: https://gerrit.mot.com/2060858
SME-Granted: SME Approvals Granted
SLTApproved: Slta Waiver
Tested-by: Jira Key
Reviewed-by: Yue Sun <sunyue5@lenovo.com>
Reviewed-by: Bin Liu <liubin7@motorola.com>
Submit-Approved: Jira Key
2025-12-08 00:13:54 +00:00
claxten10
913616b380 misc: mtk/connectivity: Build gps driver
Signed-off-by: claxten10 <claxten10@gmail.com>
2025-12-08 00:13:54 +00:00
bengris32
4341b860ee conninfra: Suppress spammy verbose logging
Change-Id: I4dcf1ecea571a48f023a992f8a9799df219b75b8
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:54 +00:00
bengris32
569320a6a6 connectivity: Disable WLAN boost if !CONFIG_MTK_CPU_CTRL
Change-Id: I4bf1df6b600e2a3c3495e1a149a993cf029c57fa
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:53 +00:00
bengris32
5122e005a0 drivers: connectivity: {connfem,gps}: Build modules into kernel
Change-Id: Ib72fa5910b9e43efa266cd0bd0abaabb223a3b1e
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:53 +00:00
bengris32
1df79b59a0 drivers: connectivity: gen4m: Silence more debug logging
Change-Id: Ic176c9b20b909b233bf07eb613fb04f842fe2e38
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:53 +00:00
Vaisakh Murali
b968b5b4b5 drivers: misc/mtk: connectivity-wlan: Queue delayed work on power efficient wq
Power efficient workqueues will help reduce the overall power overhead
incurred by this driver on certain workqueues.

Signed-off-by: Vaisakh Murali <mvaisakh@statixos.com>
Signed-off-by: LinkBoi00 <linkdevel@protonmail.com>
2025-12-08 00:13:53 +00:00
Vaisakh Murali
0a3a3c150b drivers: connectivity: gen4m: Change logging levels
* The logs in this driver horrendously hog CPU power,
  affecting performance.
* Show only errors.

Change-Id: I8259933219afb13037606fbb51f09cab505f5bbc
Signed-off-by: Vaisakh Murali <mvaisakh@statixos.com>
2025-12-08 00:13:53 +00:00
bengris32
7d7eca7d8e connectivity: Clean-up Makefile
* Clean up Makefile for inline compiling of connectivity modules.
2025-12-08 00:13:53 +00:00
bengris32
8c609c32a2 drivers: connectivity: bt: Don't define module init/exit if built-in to kernel
* The way MediaTek intended the connectivity modules to work when
  built-in to the kernel is to have conninfra initialise all of the
  connectivity modules by itself (Wi-Fi, BT, GPS, FM Radio, etc).

* This initialisation process would be done when conninfra was fully
  initialised and ready to communicate with the other drivers. However,
  MediaTek forgot to guard the module_init and module_exit definitions
  with the macro used to compile the driver for built-in usage. This
  causes a race condition where the Bluetooth driver tries to initialise
  before conninfra is ready, leading to an early kernel panic due to a
  null pointer dereference.

Change-Id: I77f831b2aed913865b5d77f117fdab9038e956b2
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:52 +00:00
bengris32
e9c69fdc48 drivers: connectivity: gen4m: Fix built-in config detection
Change-Id: I5e3eaf3d405cf90af1fb98f7a0281bd7a7dc298d
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:52 +00:00
Erfan Abdi
cfc37e47d1 mediatek: Port connectivity modules for in-kernel building 2025-12-08 00:13:52 +00:00
bengris32
aece3ea5d8 connectivity: Fix function prototype warnings
Change-Id: Ie9f0bb34161a0fbda3202dce0deb1e94215a38c5
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:52 +00:00
claxten10
e1e5fda7f2 misc: mtk/connectivity: Build BT driver inline
Signed-off-by: claxten10 <claxten10@gmail.com>
2025-12-08 00:13:52 +00:00
Vaisakh Murali
f8482f1818 drivers: connectivity: Add an option to build wlan driver in kernel
* This is the way MediaTek prefers it, so be it.

Signed-off-by: Vaisakh Murali <mvaisakh@statixos.com>
Change-Id: Ie02d6e887a0febad4515162d126abea2014eecf7
2025-12-08 00:13:52 +00:00
bengris32
172ccb900d gen4m: Add NL80211_WPA_VERSION_3 enumeration
Signed-off-by: bengris32 <bengris32@protonmail.ch>
Change-Id: I9fe0aa9d6420380b727532ae054d75097bacd07f
2025-12-08 00:13:52 +00:00
Woomymy
2f9d75c526 drivers: connectivity: common: Force-disable WMT debugging
Signed-off-by: Woomymy <woomy@woomy.be>
Change-Id: Ia4f6b799fc7858e77e05f50c285f6c0151d5c3f5
2025-12-08 00:13:51 +00:00
Woomymy
088e04c09b drivers: connectivity: bt-mt66xx: Disable debugging logs on all variants
Change-Id: I296bf4fdac66bc27ffbbe1dd04b3b6d4e4a7ff92
Signed-off-by: Woomymy <woomy@woomy.be>
2025-12-08 00:13:51 +00:00
zainarbani
f486fe0c02 connectivity: gen4m: Silence logspam
- Same behaviour on stock, shut it up.

Signed-off-by: zainarbani <zaintsyariev@gmail.com>
2025-12-08 00:13:51 +00:00
claxten10
fed8fcd6b8 misc: mtk/connectivity: Remove redefinitions
Signed-off-by: claxten10 <claxten10@gmail.com>
2025-12-08 00:13:51 +00:00
bengris32
7ae4ec216a drivers: connectivity: gen4m: Use PM notifier to control WLAN suspend
Change-Id: Iaa8df18c147b9dc6c940e90de6d98ee2f1cb7f51
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:51 +00:00
Erfan Abdi
08378fd86f drivers: misc/mtk: connectivity: wlan: Fix wifi random disconnections
Change-Id: Id00b452996363a14d127f6f720bf0a00a8c167ee
Signed-off-by: LinkBoi00 <linkdevel@protonmail.com>
2025-12-08 00:13:51 +00:00
bengris32
fd5c84fef6 drivers: connectivity: gen4m: Disable WLAN wakelocks
Change-Id: Ia30adf5adbb2b1b2de001b28a05cfef6186d25d2
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:51 +00:00
Erfan Abdi
8a434fe5a6 connectivity: Import from BSP modules 2025-12-08 00:13:50 +00:00
Georg Veichtlbauer
0e3f94642a arch: arm64: configs: Enable memory stats
This is actually read by BatteryStats in Android

Change-Id: Ie5717ebf33a2cab5a4f6ab1846b291931477dd95
2025-12-08 00:13:50 +00:00
Hemant Kumar
96c1338bdc arch: arm64: configs: Enable NCM function driver
Enables configfs supported NCM function driver

Change-Id: Ic23796c5a1388c41d533ca0f4fad04d01fe9e965
Signed-off-by: Hemant Kumar <hemantk@codeaurora.org>
2025-12-08 00:13:50 +00:00
Dan Vacura
74828ab332 ANDROID: defconfig: enable CONFIG_USB_CONFIGFS_F_UVC
Enable the UVC function driver to allow USB gadgets
to connect as a standard video device to a host.
2025-12-08 00:13:49 +00:00
geeny
6098341e18 arch: arm64: configs: Enable WireGuard support
Change-Id: I143bab359f49ae4f7e1b560e39a68ddf56fc0400
2025-12-08 00:13:49 +00:00
Woomymy
d31812bcce arch: arm64: configs: Disable SLUB debugging completely
Change-Id: I7a5977c3fb97a546f3e402bedad1f77ff49ece3e
Signed-off-by: Woomymy <woomy@woomy.be>
2025-12-08 00:13:49 +00:00
Woomymy
9db09ed1cf staging: mtk_ion: Silence IONMSG logspam
Change-Id: I7d932a56a6d1fb2eca6a76ed966566b000ae24b8
Signed-off-by: Woomymy <woomy@woomy.be>
2025-12-08 00:13:49 +00:00
Woomymy
c91c616e3c Revert "[ALPS05269737] USB: Enhance RNDIS Performance"
Reason for revert: Mediatek "optimized" RNDIS so well that they
literally broke NCM

This reverts commit 3f2cec825b.

Change-Id: Idf19e3761a9ce31f9a38c357ae758c87afdc0d78
Signed-off-by: Woomymy <woomy@woomy.be>
2025-12-08 00:13:49 +00:00
Woomymy
f144c308f3 Revert "[ALPS05333045] cert: fix 10466153"
This reverts commit 8145844a13.
2025-12-08 00:13:49 +00:00
Woomymy
f8c51ef721 Revert "[ALPS05130667] usb: fix flag logic error"
This reverts commit 639af33ffc.
2025-12-08 00:13:49 +00:00
rogercl.yang
cc3c01ac80 ANDROID: adding __nocfi to cpuidle_enter_state
Background:
  When a CPU is about to enter an idle state, it informs RCU through
rcu_idle_enter(), after which RCU ignores that CPU's read-side critical
sections. However, the CFI check mechanism inside the idle flow calls
rcu_read_lock(), so the "rcu_read_lock() used illegally while idle"
warning is triggered because rcu_idle_enter() has already been called.

  Besides, a pointer returned by rcu_dereference() might be invalid,
since RCU read-side critical sections are ignored on the CPU going
idle; this can cause problems such as accessing the wrong data or
address, or a kernel exception.

Based on the above description:
  Add __nocfi to cpuidle_enter_state() to avoid the
"rcu_read_lock() used illegally while idle!" warning and to avoid
the use of invalid rcu_dereference() pointers in this situation.

Bug: 169017431
Change-Id: I8bbe25704e18cfde351a8f4277dd4b44b07421f5
Signed-off-by: rogercl.yang <rogercl.yang@mediatek.com>
Signed-off-by: Chun-Hung Wu <chun-hung.wu@mediatek.com>
2025-12-08 00:13:48 +00:00
Sami Tolvanen
3477e31ecf ANDROID: arm64: add __va_function
With CFI, the compiler replaces function references with pointers
to the CFI jump table. This breaks passing these addresses to
code running at EL2, where the jump tables are not valid. Add a
__va_function macro similarly to the earlier __pa_function to take
address of the actual function in inline assembly and use that in
kvm_ksym_ref instead.

Bug: 163385976
Change-Id: I097b99409995512c00786300e7d18fe42c720a1b
(cherry picked from commit 2f4d6c9fd77c88ad0500aad4bf1f64aaf2654c49)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2025-12-08 00:13:48 +00:00
bengris32
297ad09d0a mali_valhall: Remove MediaTek memtrack support
* We'll be using gs101 memtrack from now on.

Change-Id: I2d91e0d57e59549e3f5bf915f428bc9c14136478
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:48 +00:00
Ankit Goyal
918b305e35 mali_kbase: platform: Add per-process and global sysfs nodes for GPU mem usage
Bug: 191966412
Signed-off-by: Ankit Goyal <layog@google.com>
Change-Id: Id47feadaf9da7ef8e22494ab64e6263d7f87213c
2025-12-08 00:13:48 +00:00
Ankit Goyal
f8c4c26f2a mali_kbase: platform: Add per-process and global accounting for dma-buf pages
This adds dma_buf_pages alongside total_gpu_pages to track GPU
addressable dmabuf pages for each process and for the whole device.

Bug: 191966412
Signed-off-by: Ankit Goyal <layog@google.com>
Change-Id: I29da69e469395d30e784ea9c2ffddcf6fab688fd
2025-12-08 00:13:48 +00:00
Kimberly Brown
37494e2bd1 kobject: Add support for default attribute groups to kobj_type
kobj_type currently uses a list of individual attributes to store
default attributes. Attribute groups are more flexible than a list of
attributes because groups provide support for attribute visibility. So,
add support for default attribute groups to kobj_type.

In future patches, the existing uses of kobj_type’s attribute list will
be converted to attribute groups. When that is complete, kobj_type’s
attribute list, “default_attrs”, will be removed.

Signed-off-by: Kimberly Brown <kimbrownkd@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: Id6e67b4b7311ee0ced3653220d4d5e86e3f2ede0
2025-12-08 00:13:48 +00:00
Vaisakh Murali
50abf11357 drivers: mtk-perf: Shut up with the spam
* fpsgo is a proprietary kernel driver (yes, these exist in mtk); this line
  keeps spamming the log, masking what I actually want to see in the logs.

Signed-off-by: Vaisakh Murali <vaisakhmurali@gmail.com>
Signed-off-by: zainarbani <zaintsyariev@gmail.com>
2025-12-08 00:13:48 +00:00
TheMalachite
a4307e6e7f arch: arm64: Remove console args from cmdline 2025-12-08 00:13:47 +00:00
kdrag0n
acb8bfc88b arch: arm64: dts: Suppress verbose output during boot
This should make the kernel initialization faster as it suppresses any
potential serial console output.

Signed-off-by: kdrag0n <dragon@khronodragon.com>
2025-12-08 00:13:47 +00:00
Gagan Malvi
e6cc0753d4 arch: arm64: dts: Remove cmdline argument for SLUB debugging.
Signed-off-by: Gagan Malvi <malvigagan@gmail.com>
2025-12-08 00:13:47 +00:00
Arian
f5da0116bd cpufreq: Ensure the minimal frequency is lower than the maximal frequency
* Libperfmgr increases the minimum frequency to 9999999 in order to boost
  the CPU to the maximum frequency. This usually works because it also
  increases the max frequency to 9999999 at init. However, if the maximum
  frequency is decreased afterwards, which mi_thermald does, setting the
  minimum frequency to 9999999 fails because it exceeds the maximum
  frequency.

* Allow setting a minimum frequency higher than the maximum frequency
  (and a maximum lower than the minimum) by adjusting the minimum
  frequency down whenever it exceeds the maximum frequency.

Change-Id: I25b7ccde714aac14c8fdb9910857c3bd38c0aa05
2025-12-08 00:13:47 +00:00
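The clamping behaviour described above can be sketched in plain C. This is an illustrative model, not the actual cpufreq code: `freq_policy` and the `policy_set_*` helpers are invented names standing in for the policy min/max update paths.

```c
#include <assert.h>

/* Sketch of the described fix: when a requested minimum exceeds the
 * current maximum, clamp it instead of rejecting the write, so that a
 * later drop of the maximum (e.g. by a thermal daemon) cannot make
 * boost requests fail. */
struct freq_policy {
    unsigned int min;   /* kHz */
    unsigned int max;   /* kHz */
};

static void policy_set_min(struct freq_policy *p, unsigned int new_min)
{
    /* Clamp rather than fail if the request exceeds the max. */
    p->min = (new_min > p->max) ? p->max : new_min;
}

static void policy_set_max(struct freq_policy *p, unsigned int new_max)
{
    p->max = new_max;
    if (p->min > p->max)        /* keep the invariant min <= max */
        p->min = p->max;
}
```

With this model, lowering the max first (as mi_thermald does) no longer breaks a subsequent 9999999 boost request.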
Sultan Alsawaf
d17ffb511b sched/fair: Compile out NUMA code entirely when NUMA is disabled
Scheduler code is very hot and every little optimization counts. Instead
of constantly checking sched_numa_balancing when NUMA is disabled,
compile it out.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I7334594fbe835f615a199cfe02ee526135abab06
2025-12-08 00:13:47 +00:00
bengris32
5ec257351b arch: arm64: configs: Disable kernel AAL support
* Not only is AAL broken with our blobs (constantly spamming
  that CONFIG_MTK_AAL_SUPPORT is disabled, even though it isn't),
  it also causes the brightness levels to be forcefully remapped
  to the 0-1024 range.

* Since AAL is broken anyway, just disable it.

Change-Id: Icbb402c435d7af1512d381a0a136d181f064771a
Signed-off-by: bengris32 <bengris32@protonmail.ch>
2025-12-08 00:13:47 +00:00
claxten10
2fd2f31177 misc: mtk/flashlight: Import minimal Xiaomi changes
Signed-off-by: claxten10 <claxten10@gmail.com>
2025-12-08 00:13:47 +00:00
huangsh4
fa95c9f35e misc: mtk/flashlight: optimize for flash light
camera:
optimize for flash light

Change-Id: Ia5c8614944c1554a1bb1771dcbd2d37bc56cfcbf
Signed-off-by: huangsh4 <huangsh4@lenovo.com>
Reviewed-on: https://gerrit.mot.com/1924674
Reviewed-by: Darong Huang <huangdra@motorola.com>
Reviewed-by: Heng Chen <chenheng3@lenovo.com>
Reviewed-by: Shanghui Zhang <zhangsh@motorola.com>
Reviewed-by: Shenhuai Huang <huangsh4@motorola.com>
Reviewed-by: Zhilong Wang <wangzl30@motorola.com>
Reviewed-by: Xu Ji <jixu@motorola.com>
Reviewed-by: Long Cheng <chengl1@motorola.com>
SME-Granted: SME Approvals Granted
SLTApproved: Slta Waiver
Tested-by: Jira Key
Reviewed-by: Zhuoran Xu <xuzr3@motorola.com>
Reviewed-by: Jian Zhang <zhangjo@motorola.com>
Reviewed-by: Zhichao Chen <chenzc2@motorola.com>
Submit-Approved: Jira Key
2025-12-08 00:13:47 +00:00
huangsh4
5aabcd1b1f misc: mtk/flashlight: enable flashlight feature
enable flashlight feature.

Change-Id: I59e570d68d49a48a0bf70ab45f4ecd4d74f4636c
Signed-off-by: huangsh4 <huangsh4@lenovo.com>
Reviewed-on: https://gerrit.mot.com/1906966
SLTApproved: Slta Waiver
SME-Granted: SME Approvals Granted
Submit-Approved: Jira Key
Tested-by: Jira Key
Reviewed-by: Jian Zhang <zhangjo@motorola.com>
Reviewed-by: Zhuoran Xu <xuzr3@motorola.com>
Reviewed-by: Zhilong Wang <wangzl30@motorola.com>
Reviewed-by: Shanghui Zhang <zhangsh@motorola.com>
Reviewed-by: Qiang Guo <guoq8@motorola.com>
Reviewed-by: Long Cheng <chengl1@motorola.com>
Reviewed-by: Zhichao Chen <chenzc2@motorola.com>
2025-12-08 00:13:46 +00:00
claxten10
77ad39bec2 Revert "misc: mtk/flashlight: Import Xiaomi changes"
* Will move to Motorola's newer driver.

This reverts commit d83925de53b5da625bef2c41b95265eed31ccaa9.
2025-12-08 00:13:46 +00:00
6f176e74f6 dts/mt6781: remove duplicate of vdec_gcon
Signed-off-by: Onelots <onelots@onelots.fr>
2025-12-08 00:13:46 +00:00
0918ab9332 dts/mt6781: remove duplicate of venc@17000000
Signed-off-by: Onelots <onelots@onelots.fr>
Co-authored-by: Edrick Sinsuan <evcsinsuan@gmail.com>
2025-12-08 00:13:46 +00:00
a8b425e789 dts/mt6781: uart: disable all useless uart nodes
Signed-off-by: Onelots <onelots@onelots.fr>
2025-12-08 00:13:46 +00:00
Sultan Alsawaf
315ce721fb binder: Stub out debug prints by default
Binder code is very hot, so checking frequently to see if a debug
message should be printed is a waste of cycles. We're not debugging
binder, so just stub out the debug prints to compile them out entirely.

Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
2025-12-08 00:13:46 +00:00
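The stubbing technique this commit uses can be sketched as follows. This is an illustrative sketch, not the actual binder code; `debug_sink` and the counter are invented here to make the effect observable.

```c
#include <assert.h>

/* Sketch of compiling out debug prints: the `if (0)` keeps the
 * arguments type-checked, but the compiler removes the call entirely,
 * so no runtime mask check or format string survives in the hot path. */
static int debug_calls;

static void debug_sink(const char *fmt)
{
    (void)fmt;
    debug_calls++;
}

#define binder_debug(fmt)      \
    do {                       \
        if (0)                 \
            debug_sink(fmt);   \
    } while (0)
```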
kdrag0n
762cfaa762 arm64: debug: disable self-hosted debug by default
Signed-off-by: kdrag0n <dragon@khronodragon.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
2025-12-08 00:13:45 +00:00
John Dias
8ae07fff57 binder: set binder_debug_mask=0 to suppress logging
Excessive logging -- not present on angler -- is affecting
performance, contributing to missed audio deadlines and likely other
latency-dependent tasks.
Bug: 30375418

Change-Id: I88b9c7fa4540ad46e564f44a0e589b5215e8487d
2025-12-08 00:13:45 +00:00
Pzqqt
b5af57206d drivers: scsi: Reduce logspam 2025-12-08 00:13:45 +00:00
claxten10
c0863fa583 arch: arm64: configs: Enable SIA81XX driver
* Used in Indonesian models of fleur.

Signed-off-by: claxten10 <claxten10@gmail.com>
2025-12-08 00:13:45 +00:00
2720 changed files with 2516093 additions and 3975 deletions

View File

@@ -485,7 +485,7 @@ section that the grace period must wait on.
noted by <tt>rcu_node_context_switch()</tt> on the left.
On the other hand, if the CPU takes a scheduler-clock interrupt
while executing in usermode, a quiescent state will be noted by
<tt>rcu_check_callbacks()</tt> on the right.
<tt>rcu_sched_clock_irq()</tt> on the right.
Either way, the passage through a quiescent state will be noted
in a per-CPU variable.
@@ -651,7 +651,7 @@ to end.
These callbacks are identified by <tt>rcu_advance_cbs()</tt>,
which is usually invoked by <tt>__note_gp_changes()</tt>.
As shown in the diagram below, this invocation can be triggered by
the scheduling-clock interrupt (<tt>rcu_check_callbacks()</tt> on
the scheduling-clock interrupt (<tt>rcu_sched_clock_irq()</tt> on
the left) or by idle entry (<tt>rcu_cleanup_after_idle()</tt> on
the right, but only for kernels build with
<tt>CONFIG_RCU_FAST_NO_HZ=y</tt>).

View File

@@ -349,7 +349,7 @@
font-weight="bold"
font-size="192"
id="text202-7-5"
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_check_callbacks()</text>
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_sched_clock_irq()</text>
<rect
x="7069.6187"
y="5087.4678"


View File

@@ -3902,7 +3902,7 @@
font-style="normal"
y="-4418.6582"
x="3745.7725"
xml:space="preserve">rcu_check_callbacks()</text>
xml:space="preserve">rcu_sched_clock_irq()</text>
</g>
<g
transform="translate(-850.30204,55463.106)"
@@ -4968,7 +4968,7 @@
font-weight="bold"
font-size="192"
id="text202-7-5-19"
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_check_callbacks()</text>
style="font-size:192px;font-style:normal;font-weight:bold;text-anchor:start;fill:#000000;stroke-width:0.025in;font-family:Courier">rcu_sched_clock_irq()</text>
<rect
x="5314.2671"
y="82817.688"


View File

@@ -775,7 +775,7 @@
font-style="normal"
y="-4418.6582"
x="3745.7725"
xml:space="preserve">rcu_check_callbacks()</text>
xml:space="preserve">rcu_sched_clock_irq()</text>
</g>
<g
transform="translate(399.7744,828.86448)"


View File

@@ -3771,7 +3771,9 @@
see CONFIG_RAS_CEC help text.
rcu_nocbs= [KNL]
The argument is a cpu list, as described above.
The argument is a cpu list, as described above,
except that the string "all" can be used to
specify every CPU on the system.
In kernels built with CONFIG_RCU_NOCB_CPU=y, set
the specified list of CPUs to be no-callback CPUs.

View File

@@ -28,11 +28,10 @@
/* chosen */
chosen: chosen {
bootargs = "console=tty0 console=ttyS0,921600n1 root=/dev/ram \
vmalloc=400M slub_debug=OFZPU swiotlb=noforce \
initcall_debug=1 \
bootargs = "root=/dev/ram \
vmalloc=400M swiotlb=noforce \
firmware_class.path=/vendor/firmware \
page_owner=on loop.max_part=7";
page_owner=on quiet loop.max_part=7";
kaslr-seed = <0 0>;
};
@@ -1864,6 +1863,7 @@
apdma: dma-controller@10200d80 {
compatible = "mediatek,mt6577-uart-dma";
status = "disabled";
reg = <0 0x10200d80 0 0x80>,
<0 0x10200e00 0 0x80>,
<0 0x10200e80 0 0x80>,
@@ -1881,6 +1881,7 @@
apuart0: serial@11002000 {
compatible = "mediatek,mt6577-uart";
status = "disabled";
reg = <0 0x11002000 0 0x1000>;
interrupts = <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk26m>, <&infracfg_ao INFRACFG_AO_UART0_CG>;
@@ -1892,6 +1893,7 @@
apuart1: serial@11003000 {
compatible = "mediatek,mt6577-uart";
status = "disabled";
reg = <0 0x11003000 0 0x1000>;
interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk26m>, <&infracfg_ao INFRACFG_AO_UART1_CG>;
@@ -2286,6 +2288,7 @@
ap_uart2@11018000 {
compatible = "mediatek,ap_uart2";
status = "disabled";
reg = <0 0x11018000 0 0x1000>;
};
@@ -4184,16 +4187,6 @@
"MT_CG_VDEC";
};
vdec_gcon_clk: vdec_gcon@16010000 {
compatible = "mediatek,vdec_gcon";
reg = <0 0x16010000 0 0x8000>;
};
vdec_gcon@16018000 {
compatible = "mediatek,vdec_gcon";
reg = <0 0x16018000 0 0x8000>;
};
venc_gcon: venc_gcon@17000000 {
compatible = "mediatek,venc_gcon",
"mediatek,mt6833-venc_gcon", "syscon";
@@ -4226,12 +4219,6 @@
mediatek,smi-id = <7>;
};
venc@17020000 {
compatible = "mediatek,venc";
reg = <0 0x17020000 0 0x10000>;
interrupts = <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>;
};
jpgenc@17030000 {
compatible = "mediatek,jpgenc";
reg = <0 0x17030000 0 0x10000>;

View File

@@ -22,13 +22,14 @@ CONFIG_SCHED_MC=y
CONFIG_NR_CPUS=8
CONFIG_ARM64_DMA_USE_IOMMU=y
CONFIG_RANDOMIZE_BASE=y
CONFIG_CMDLINE="console=tty0 console=ttyMT3,921600n1 root=/dev/ram vmalloc=496M slub_max_order=0 slub_debug=O ramoops_memreserve=4M"
CONFIG_CMDLINE="root=/dev/ram vmalloc=496M slub_max_order=0 slub_debug=- ramoops_memreserve=4M"
# CONFIG_EFI is not set
CONFIG_BUILD_ARM64_APPENDED_DTB_IMAGE=y
CONFIG_BUILD_ARM64_APPENDED_DTB_IMAGE_NAMES="mediatek/mt6781"
CONFIG_BUILD_ARM64_DTB_OVERLAY_IMAGE=y
CONFIG_BUILD_ARM64_DTB_OVERLAY_IMAGE_NAMES="mediatek/fleur"
CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
# CONFIG_PD_DBG_INFO is not set
CONFIG_ENERGY_MODEL=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT=y
@@ -41,6 +42,7 @@ CONFIG_MODULE_SRCVERSION_ALL=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_BLK_INLINE_ENCRYPTION=y
CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK=y
CONFIG_MQ_IOSCHED_KYBER=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_ZSMALLOC=y
CONFIG_XFRM_MIGRATE=y
@@ -55,6 +57,7 @@ CONFIG_FW_LOADER_USER_HELPER=y
CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
# CONFIG_FW_CACHE is not set
CONFIG_ZRAM=y
CONFIG_ZRAM_WRITEBACK=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=16
CONFIG_ANDROID_DEFAULT_SETTING=y
CONFIG_MTK_ANDROID_DEFAULT_SETTING=y
@@ -67,6 +70,8 @@ CONFIG_MTK_MUSB_DUAL_ROLE=y
CONFIG_MTK_PLATFORM="mt6785"
CONFIG_ARCH_MTK_PROJECT="fleur"
CONFIG_BLK_CGROUP=y
# CONFIG_MEMCG is not set
# CONFIG_MEMCG_SWAP is not set
CONFIG_MTK_BATTERY_OC_POWER_THROTTLING=y
CONFIG_MTK_BATTERY_PERCENTAGE_POWER_THROTTLING=y
CONFIG_MTK_LOW_BATTERY_POWER_THROTTLING=y
@@ -95,7 +100,6 @@ CONFIG_MTK_ROUND_CORNER_SUPPORT=y
CONFIG_LCM_HEIGHT="2400"
CONFIG_LCM_WIDTH="1080"
CONFIG_LEDS_MTK_DISP=y
CONFIG_MTK_AAL_SUPPORT=y
CONFIG_BACKLIGHT_SUPPORT_2047_FEATURE=y
CONFIG_MTK_VDEC_FMT=y
CONFIG_MTK_PSEUDO_M4U=y
@@ -158,6 +162,7 @@ CONFIG_MTK_ECCCI_C2K=y
CONFIG_MTK_BTIF=y
CONFIG_MTK_COMBO=y
CONFIG_MTK_COMBO_CHIP_CONSYS_6781=y
CONFIG_MTK_COMBO_BT=y
CONFIG_MTK_COMBO_GPS=y
CONFIG_MTK_COMBO_WIFI=y
CONFIG_MTK_DHCPV6C_WIFI=y
@@ -165,7 +170,9 @@ CONFIG_MTK_GPS_SUPPORT=y
CONFIG_MTK_GPS_EMI=y
CONFIG_MTK_FMRADIO=y
CONFIG_MTK_FM_CHIP="MT6631_FM"
CONFIG_MTK_CONNFEM=y
CONFIG_MTK_CONNSYS_DEDICATED_LOG_PATH=y
CONFIG_WLAN_DRV_BUILD_IN=y
CONFIG_HAVE_MTK_ENABLE_GENIEZONE=y
CONFIG_MTK_ENABLE_GENIEZONE=y
CONFIG_MTK_GZ_MAIN=y
@@ -191,18 +198,17 @@ CONFIG_MTK_RTC=y
CONFIG_MTK_TINYSYS_SCP_SUPPORT=y
CONFIG_MTK_TINYSYS_SSPM_SUPPORT=y
CONFIG_MTK_TINYSYS_SSPM_V2=y
CONFIG_MTK_AEE_HANGDET=y
CONFIG_MTK_CM_MGR_LEGACY=y
CONFIG_MTK_LEGACY_THERMAL=y
CONFIG_MTK_THERMAL_PA_VIA_ATCMD=y
CONFIG_MTK_CAMERA_ISP_RSC_SUPPORT=y
CONFIG_MTK_CAMERA_ISP_MFB_SUPPORT=y
CONFIG_MTK_CAMERA_ISP_FD_SUPPORT=y
CONFIG_MTK_SELINUX_AEE_WARNING=y
CONFIG_MTK_DRAM_LOG_STORE=y
CONFIG_MTK_DRAM_LOG_STORE_ADDR=0x0011DF00
CONFIG_MTK_DRAM_LOG_STORE_SIZE=0x100
CONFIG_MTK_FREQ_HOPPING=y
CONFIG_MEMORY_STATE_TIME=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_SCSI_SCAN_ASYNC=y
@@ -219,11 +225,13 @@ CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY_FEC=y
CONFIG_IFB=y
# CONFIG_ETHERNET is not set
CONFIG_WIREGUARD=y
CONFIG_PPP_FILTER=y
CONFIG_PPP_MULTILINK=y
CONFIG_PPPOE=y
CONFIG_PPP_ASYNC=y
CONFIG_PPP_SYNC_TTY=y
CONFIG_POWERCAP=y
CONFIG_USB_USBNET=y
# CONFIG_KEYBOARD_ATKBD is not set
CONFIG_KEYBOARD_MTK=y
@@ -337,6 +345,8 @@ CONFIG_USB_CONFIGFS_RNDIS=y
CONFIG_USB_CONFIGFS_MASS_STORAGE=y
CONFIG_USB_CONFIGFS_MTK_FASTMETA=y
CONFIG_USB_CONFIGFS_F_HID=y
CONFIG_USB_CONFIGFS_F_UVC=y
CONFIG_USB_CONFIGFS_NCM=y
CONFIG_MMC=y
CONFIG_MMC_BLOCK_MINORS=32
CONFIG_MMC_CRYPTO=y
@@ -394,9 +404,10 @@ CONFIG_STATIC_USERMODEHELPER=y
CONFIG_STATIC_USERMODEHELPER_PATH=""
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_CRYPTO_TWOFISH=y
CONFIG_CRYPTO_BLAKE2B=y
# CONFIG_CRYPTO_HW is not set
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
CONFIG_FRAME_WARN=2800
CONFIG_DETECT_HUNG_TASK=y
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
@@ -416,6 +427,7 @@ CONFIG_CHARGER_BQ2589X=y
CONFIG_SPI_SPIDEV=y
CONFIG_CPU_THERMAL=y
CONFIG_SND_SOC_FS18XX=y
CONFIG_SND_SOC_SIA81XX=y
CONFIG_I2C_CHARDEV=y
CONFIG_SND_SOC_DSPK_LOL_HP=y
CONFIG_SND_SOC_AW87XXX=y
@@ -435,3 +447,13 @@ CONFIG_EXFAT_DISCARD=y
CONFIG_EXFAT_DEFAULT_CODEPAGE=437
CONFIG_EXFAT_DEFAULT_IOCHARSET="utf8"
CONFIG_ANT_CHECK=y
# LTO
CONFIG_INLINE_OPTIMIZATION=y
CONFIG_LTO=y
CONFIG_LTO_CLANG=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG=y
CONFIG_ARCH_SUPPORTS_THINLTO=y
# CONFIG_LTO_NONE is not set
# CONFIG_LTO_CLANG_FULL is not set
CONFIG_THINLTO=y

View File

@@ -40,9 +40,9 @@
/* Translate a kernel address of @sym into its equivalent linear mapping */
#define kvm_ksym_ref(sym) \
({ \
void *val = &sym; \
void *val = __va_function(sym); \
if (!is_kernel_in_hyp_mode()) \
val = lm_alias(&sym); \
val = lm_alias(val); \
val; \
})

View File

@@ -320,13 +320,15 @@ static inline void *phys_to_virt(phys_addr_t x)
* virtual address. Therefore, use inline assembly to ensure we are
* always taking the address of the actual function.
*/
#define __pa_function(x) ({ \
unsigned long addr; \
#define __va_function(x) ({ \
void *addr; \
asm("adrp %0, " __stringify(x) "\n\t" \
"add %0, %0, :lo12:" __stringify(x) : "=r" (addr)); \
__pa_symbol(addr); \
addr; \
})
#define __pa_function(x) __pa_symbol(__va_function(x))
/*
* virt_to_page(k) convert a _valid_ virtual address to struct page *
* virt_addr_valid(k) indicates whether a virtual address is valid

View File

@@ -64,7 +64,7 @@ NOKPROBE_SYMBOL(mdscr_read);
* Allow root to disable self-hosted debug from userspace.
* This is useful if you want to connect an external JTAG debugger.
*/
static bool debug_enabled = true;
static bool debug_enabled;
static int create_debug_debugfs_entry(void)
{

View File

@@ -1,258 +1,131 @@
/*
* Copyright (C) 2013 ARM Ltd.
* Copyright (C) 2013 Linaro.
* Copyright (c) 2017 ARM Ltd
* All rights reserved.
*
* This code is based on glibc cortex strings work originally authored by Linaro
* and re-licensed under GPLv2 for the Linux kernel. The original code can
* be found @
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. The name of the company may not be used to endorse or promote
* products derived from this software without specific prior written
* permission.
*
* http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
* files/head:/src/aarch64/
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
* THIS SOFTWARE IS PROVIDED BY ARM LTD ``AS IS'' AND ANY EXPRESS OR IMPLIED
* WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL ARM LTD BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
* TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
* LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
* NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/* Assumptions:
*
* ARMv8-a, AArch64, unaligned accesses.
*/
/* includes here */
#include <linux/linkage.h>
#include <asm/assembler.h>
/*
* compare memory areas(when two memory areas' offset are different,
* alignment handled by the hardware)
*
* Parameters:
* x0 - const memory area 1 pointer
* x1 - const memory area 2 pointer
* x2 - the maximal compare byte length
* Returns:
* x0 - a compare result, maybe less than, equal to, or greater than ZERO
*/
/* Parameters and result. */
src1 .req x0
src2 .req x1
limit .req x2
result .req x0
#define src1 x0
#define src2 x1
#define limit x2
#define result w0
/* Internal variables. */
data1 .req x3
data1w .req w3
data2 .req x4
data2w .req w4
has_nul .req x5
diff .req x6
endloop .req x7
tmp1 .req x8
tmp2 .req x9
tmp3 .req x10
pos .req x11
limit_wd .req x12
mask .req x13
#define data1 x3
#define data1w w3
#define data2 x4
#define data2w w4
#define tmp1 x5
/* Small inputs of less than 8 bytes are handled separately. This allows the
main code to be sped up using unaligned loads since there are now at least
8 bytes to be compared. If the first 8 bytes are equal, align src1.
This ensures each iteration does at most one unaligned access even if both
src1 and src2 are unaligned, and mutually aligned inputs behave as if
aligned. After the main loop, process the last 8 bytes using unaligned
accesses. */
.p2align 6
WEAK(memcmp)
cbz limit, .Lret0
eor tmp1, src1, src2
tst tmp1, #7
b.ne .Lmisaligned8
ands tmp1, src1, #7
b.ne .Lmutual_align
sub limit_wd, limit, #1 /* limit != 0, so no underflow. */
lsr limit_wd, limit_wd, #3 /* Convert to Dwords. */
/*
* The input source addresses are at alignment boundary.
* Directly compare eight bytes each time.
*/
.Lloop_aligned:
ldr data1, [src1], #8
ldr data2, [src2], #8
.Lstart_realigned:
subs limit_wd, limit_wd, #1
eor diff, data1, data2 /* Non-zero if differences found. */
csinv endloop, diff, xzr, cs /* Last Dword or differences. */
cbz endloop, .Lloop_aligned
subs limit, limit, 8
b.lo .Lless8
/* Not reached the limit, must have found a diff. */
tbz limit_wd, #63, .Lnot_limit
/* Limit >= 8, so check first 8 bytes using unaligned loads. */
ldr data1, [src1], 8
ldr data2, [src2], 8
and tmp1, src1, 7
add limit, limit, tmp1
cmp data1, data2
bne .Lreturn
/* Limit % 8 == 0 => the diff is in the last 8 bytes. */
ands limit, limit, #7
b.eq .Lnot_limit
/*
* The remained bytes less than 8. It is needed to extract valid data
* from last eight bytes of the intended memory range.
*/
lsl limit, limit, #3 /* bytes-> bits. */
mov mask, #~0
CPU_BE( lsr mask, mask, limit )
CPU_LE( lsl mask, mask, limit )
bic data1, data1, mask
bic data2, data2, mask
/* Align src1 and adjust src2 with bytes not yet done. */
sub src1, src1, tmp1
sub src2, src2, tmp1
orr diff, diff, mask
b .Lnot_limit
subs limit, limit, 8
b.ls .Llast_bytes
.Lmutual_align:
/*
* Sources are mutually aligned, but are not currently at an
* alignment boundary. Round down the addresses and then mask off
* the bytes that precede the start point.
*/
bic src1, src1, #7
bic src2, src2, #7
ldr data1, [src1], #8
ldr data2, [src2], #8
/*
* We can not add limit with alignment offset(tmp1) here. Since the
* addition probably make the limit overflown.
*/
sub limit_wd, limit, #1/*limit != 0, so no underflow.*/
and tmp3, limit_wd, #7
lsr limit_wd, limit_wd, #3
add tmp3, tmp3, tmp1
add limit_wd, limit_wd, tmp3, lsr #3
add limit, limit, tmp1/* Adjust the limit for the extra. */
/* Loop performing 8 bytes per iteration using aligned src1.
Limit is pre-decremented by 8 and must be larger than zero.
Exit if <= 8 bytes left to do or if the data is not equal. */
.p2align 4
.Lloop8:
ldr data1, [src1], 8
ldr data2, [src2], 8
subs limit, limit, 8
ccmp data1, data2, 0, hi /* NZCV = 0b0000. */
b.eq .Lloop8
lsl tmp1, tmp1, #3/* Bytes beyond alignment -> bits.*/
neg tmp1, tmp1/* Bits to alignment -64. */
mov tmp2, #~0
/*mask off the non-intended bytes before the start address.*/
CPU_BE( lsl tmp2, tmp2, tmp1 )/*Big-endian.Early bytes are at MSB*/
/* Little-endian. Early bytes are at LSB. */
CPU_LE( lsr tmp2, tmp2, tmp1 )
cmp data1, data2
bne .Lreturn
orr data1, data1, tmp2
orr data2, data2, tmp2
b .Lstart_realigned
/* Compare last 1-8 bytes using unaligned access. */
.Llast_bytes:
ldr data1, [src1, limit]
ldr data2, [src2, limit]
/*src1 and src2 have different alignment offset.*/
.Lmisaligned8:
cmp limit, #8
b.lo .Ltiny8proc /*limit < 8: compare byte by byte*/
/* Compare data bytes and set return value to 0, -1 or 1. */
.Lreturn:
#ifndef __AARCH64EB__
rev data1, data1
rev data2, data2
#endif
cmp data1, data2
.Lret_eq:
cset result, ne
cneg result, result, lo
ret
and tmp1, src1, #7
neg tmp1, tmp1
add tmp1, tmp1, #8/*valid length in the first 8 bytes of src1*/
and tmp2, src2, #7
neg tmp2, tmp2
add tmp2, tmp2, #8/*valid length in the first 8 bytes of src2*/
subs tmp3, tmp1, tmp2
csel pos, tmp1, tmp2, hi /*Choose the maximum.*/
sub limit, limit, pos
/*compare the proceeding bytes in the first 8 byte segment.*/
.Ltinycmp:
ldrb data1w, [src1], #1
ldrb data2w, [src2], #1
subs pos, pos, #1
ccmp data1w, data2w, #0, ne /* NZCV = 0b0000. */
b.eq .Ltinycmp
cbnz pos, 1f /*diff occurred before the last byte.*/
.p2align 4
/* Compare up to 8 bytes. Limit is [-8..-1]. */
.Lless8:
adds limit, limit, 4
b.lo .Lless4
ldr data1w, [src1], 4
ldr data2w, [src2], 4
cmp data1w, data2w
b.eq .Lstart_align
1:
sub result, data1, data2
ret
.Lstart_align:
lsr limit_wd, limit, #3
cbz limit_wd, .Lremain8
ands xzr, src1, #7
b.eq .Lrecal_offset
/*process more leading bytes to make src1 aligned...*/
add src1, src1, tmp3 /*backwards src1 to alignment boundary*/
add src2, src2, tmp3
sub limit, limit, tmp3
lsr limit_wd, limit, #3
cbz limit_wd, .Lremain8
/*load 8 bytes from aligned SRC1..*/
ldr data1, [src1], #8
ldr data2, [src2], #8
subs limit_wd, limit_wd, #1
eor diff, data1, data2 /*Non-zero if differences found.*/
csinv endloop, diff, xzr, ne
cbnz endloop, .Lunequal_proc
/*How far is the current SRC2 from the alignment boundary...*/
and tmp3, tmp3, #7
.Lrecal_offset:/*src1 is aligned now..*/
neg pos, tmp3
.Lloopcmp_proc:
/*
* Divide the eight bytes into two parts. First,backwards the src2
* to an alignment boundary,load eight bytes and compare from
* the SRC2 alignment boundary. If all 8 bytes are equal,then start
* the second part's comparison. Otherwise finish the comparison.
* This special handle can garantee all the accesses are in the
* thread/task space in avoid to overrange access.
*/
ldr data1, [src1,pos]
ldr data2, [src2,pos]
eor diff, data1, data2 /* Non-zero if differences found. */
cbnz diff, .Lnot_limit
/*The second part process*/
ldr data1, [src1], #8
ldr data2, [src2], #8
eor diff, data1, data2 /* Non-zero if differences found. */
subs limit_wd, limit_wd, #1
csinv endloop, diff, xzr, ne/*if limit_wd is 0,will finish the cmp*/
cbz endloop, .Lloopcmp_proc
.Lunequal_proc:
cbz diff, .Lremain8
/* There is difference occurred in the latest comparison. */
.Lnot_limit:
/*
* For little endian,reverse the low significant equal bits into MSB,then
* following CLZ can find how many equal bits exist.
*/
CPU_LE( rev diff, diff )
CPU_LE( rev data1, data1 )
CPU_LE( rev data2, data2 )
/*
* The MS-non-zero bit of DIFF marks either the first bit
* that is different, or the end of the significant data.
* Shifting left now will bring the critical information into the
* top bits.
*/
clz pos, diff
lsl data1, data1, pos
lsl data2, data2, pos
/*
* We need to zero-extend (char is unsigned) the value and then
* perform a signed subtraction.
*/
lsr data1, data1, #56
sub result, data1, data2, lsr #56
ret
.Lremain8:
/* Limit % 8 == 0 =>. all data are equal.*/
ands limit, limit, #7
b.eq .Lret0
.Ltiny8proc:
ldrb data1w, [src1], #1
ldrb data2w, [src2], #1
subs limit, limit, #1
ccmp data1w, data2w, #0, ne /* NZCV = 0b0000. */
b.eq .Ltiny8proc
sub result, data1, data2
ret
.Lret0:
mov result, #0
b.ne .Lreturn
sub limit, limit, 4
.Lless4:
adds limit, limit, 4
beq .Lret_eq
.Lbyte_loop:
ldrb data1w, [src1], 1
ldrb data2w, [src2], 1
subs limit, limit, 1
ccmp data1w, data2w, 0, ne /* NZCV = 0b0000. */
b.eq .Lbyte_loop
sub result, data1w, data2w
ret
ENDPIPROC(memcmp)
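A portable C reference for what the assembly above implements may help when reading the diff. The optimized routine processes 8 bytes at a time with unaligned loads as its header comment explains; this sketch captures only the contract, not the word-at-a-time strategy.

```c
#include <assert.h>
#include <stddef.h>

/* Bytewise reference for memcmp: compare up to n bytes and return a
 * negative, zero, or positive result based on the first difference. */
static int memcmp_ref(const void *s1, const void *s2, size_t n)
{
    const unsigned char *a = s1, *b = s2;
    size_t i;

    for (i = 0; i < n; i++)
        if (a[i] != b[i])
            return (int)a[i] - (int)b[i];
    return 0;
}
```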

View File

@@ -60,6 +60,7 @@ tmp3 .req x9
zeroones .req x10
pos .req x11
.p2align 6
WEAK(strcmp)
eor tmp1, src1, src2
mov zeroones, #REP8_01

View File

@@ -10,8 +10,7 @@
#include "blk.h"
static struct bio *next_bio(struct bio *bio, unsigned int nr_pages,
gfp_t gfp)
struct bio *blk_next_bio(struct bio *bio, unsigned int nr_pages, gfp_t gfp)
{
struct bio *new = bio_alloc(gfp, nr_pages);
@@ -61,7 +60,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
WARN_ON_ONCE((req_sects << 9) > UINT_MAX);
bio = next_bio(bio, 0, gfp_mask);
bio = blk_next_bio(bio, 0, gfp_mask);
bio->bi_iter.bi_sector = sector;
bio_set_dev(bio, bdev);
bio_set_op_attrs(bio, op, 0);
@@ -155,7 +154,7 @@ static int __blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
max_write_same_sectors = bio_allowed_max_sectors(q);
while (nr_sects) {
bio = next_bio(bio, 1, gfp_mask);
bio = blk_next_bio(bio, 1, gfp_mask);
bio->bi_iter.bi_sector = sector;
bio_set_dev(bio, bdev);
bio->bi_vcnt = 1;
@@ -231,7 +230,7 @@ static int __blkdev_issue_write_zeroes(struct block_device *bdev,
return -EOPNOTSUPP;
while (nr_sects) {
bio = next_bio(bio, 0, gfp_mask);
bio = blk_next_bio(bio, 0, gfp_mask);
bio->bi_iter.bi_sector = sector;
bio_set_dev(bio, bdev);
bio->bi_opf = REQ_OP_WRITE_ZEROES;
@@ -282,8 +281,8 @@ static int __blkdev_issue_zero_pages(struct block_device *bdev,
return -EPERM;
while (nr_sects != 0) {
bio = next_bio(bio, __blkdev_sectors_to_bio_pages(nr_sects),
gfp_mask);
bio = blk_next_bio(bio, __blkdev_sectors_to_bio_pages(nr_sects),
gfp_mask);
bio->bi_iter.bi_sector = sector;
bio_set_dev(bio, bdev);
bio_set_op_attrs(bio, REQ_OP_WRITE, 0);

View File

@@ -50,12 +50,12 @@ blk_mq_sched_allow_merge(struct request_queue *q, struct request *rq,
return true;
}
static inline void blk_mq_sched_completed_request(struct request *rq)
static inline void blk_mq_sched_completed_request(struct request *rq, u64 now)
{
struct elevator_queue *e = rq->q->elevator;
if (e && e->type->ops.mq.completed_request)
e->type->ops.mq.completed_request(rq);
e->type->ops.mq.completed_request(rq, now);
}
static inline void blk_mq_sched_started_request(struct request *rq)

View File

@@ -527,6 +527,9 @@ inline void __blk_mq_end_request(struct request *rq, blk_status_t error)
blk_stat_add(rq, now);
}
if (rq->internal_tag != -1)
blk_mq_sched_completed_request(rq, now);
blk_account_io_done(rq, now);
if (rq->end_io) {
@@ -563,8 +566,6 @@ static void __blk_mq_complete_request(struct request *rq)
if (!blk_mq_mark_complete(rq))
return;
if (rq->internal_tag != -1)
blk_mq_sched_completed_request(rq);
if (!test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags)) {
rq->q->softirq_done_fn(rq);

View File

@@ -300,6 +300,11 @@ static ssize_t queue_zoned_show(struct request_queue *q, char *page)
}
}
static ssize_t queue_nr_zones_show(struct request_queue *q, char *page)
{
return queue_var_show(blk_queue_nr_zones(q), page);
}
static ssize_t queue_nomerges_show(struct request_queue *q, char *page)
{
return queue_var_show((blk_queue_nomerges(q) << 1) |
@@ -637,6 +642,11 @@ static struct queue_sysfs_entry queue_zoned_entry = {
.show = queue_zoned_show,
};
static struct queue_sysfs_entry queue_nr_zones_entry = {
.attr = {.name = "nr_zones", .mode = 0444 },
.show = queue_nr_zones_show,
};
static struct queue_sysfs_entry queue_nomerges_entry = {
.attr = {.name = "nomerges", .mode = 0644 },
.show = queue_nomerges_show,
@@ -727,6 +737,7 @@ static struct attribute *default_attrs[] = {
&queue_write_zeroes_max_entry.attr,
&queue_nonrot_entry.attr,
&queue_zoned_entry.attr,
&queue_nr_zones_entry.attr,
&queue_nomerges_entry.attr,
&queue_rq_affinity_entry.attr,
&queue_iostats_entry.attr,

View File

@@ -13,6 +13,8 @@
#include <linux/rbtree.h>
#include <linux/blkdev.h>
#include "blk.h"
static inline sector_t blk_zone_start(struct request_queue *q,
sector_t sector)
{
@@ -63,6 +65,33 @@ void __blk_req_zone_write_unlock(struct request *rq)
}
EXPORT_SYMBOL_GPL(__blk_req_zone_write_unlock);
static inline unsigned int __blkdev_nr_zones(struct request_queue *q,
sector_t nr_sectors)
{
unsigned long zone_sectors = blk_queue_zone_sectors(q);
return (nr_sectors + zone_sectors - 1) >> ilog2(zone_sectors);
}
/**
* blkdev_nr_zones - Get number of zones
* @bdev: Target block device
*
* Description:
* Return the total number of zones of a zoned block device.
* For a regular block device, the number of zones is always 0.
*/
unsigned int blkdev_nr_zones(struct block_device *bdev)
{
struct request_queue *q = bdev_get_queue(bdev);
if (!blk_queue_is_zoned(q))
return 0;
return __blkdev_nr_zones(q, bdev->bd_part->nr_sects);
}
EXPORT_SYMBOL_GPL(blkdev_nr_zones);
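The zone-count computation in `__blkdev_nr_zones()` above relies on `zone_sectors` being a power of two: the divide becomes a right shift by `ilog2(zone_sectors)`, and adding `zone_sectors - 1` first rounds a partial trailing zone up to a full one. A standalone sketch (the `ilog2` loop here is a stand-in for the kernel helper):

```c
#include <assert.h>

/* Round-up zone count: ceil(nr_sectors / zone_sectors) via shift,
 * assuming zone_sectors is a power of two. */
static unsigned int nr_zones(unsigned long nr_sectors, unsigned long zone_sectors)
{
    unsigned int shift = 0;

    while ((1UL << shift) < zone_sectors)   /* stand-in for ilog2() */
        shift++;
    return (unsigned int)((nr_sectors + zone_sectors - 1) >> shift);
}
```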
/*
* Check that a zone report belongs to the partition.
* If yes, fix its start sector and write pointer, copy it in the
@@ -253,13 +282,13 @@ int blkdev_reset_zones(struct block_device *bdev,
struct bio *bio;
int ret;
if (!q)
return -ENXIO;
if (!blk_queue_is_zoned(q))
return -EOPNOTSUPP;
if (end_sector > bdev->bd_part->nr_sects)
if (bdev_read_only(bdev))
return -EPERM;
if (!nr_sectors || end_sector > bdev->bd_part->nr_sects)
/* Out of range */
return -EINVAL;
@@ -272,19 +301,14 @@ int blkdev_reset_zones(struct block_device *bdev,
end_sector != bdev->bd_part->nr_sects)
return -EINVAL;
blk_start_plug(&plug);
while (sector < end_sector) {
bio = bio_alloc(gfp_mask, 0);
bio = blk_next_bio(bio, 0, gfp_mask);
bio->bi_iter.bi_sector = sector;
bio_set_dev(bio, bdev);
bio_set_op_attrs(bio, REQ_OP_ZONE_RESET, 0);
ret = submit_bio_wait(bio);
bio_put(bio);
if (ret)
return ret;
sector += zone_sectors;
/* This may take a while, so be nice to others */
@@ -292,7 +316,12 @@ int blkdev_reset_zones(struct block_device *bdev,
}
return 0;
ret = submit_bio_wait(bio);
bio_put(bio);
blk_finish_plug(&plug);
return ret;
}
EXPORT_SYMBOL_GPL(blkdev_reset_zones);
@@ -325,8 +354,7 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
if (!rep.nr_zones)
return -EINVAL;
if (rep.nr_zones > INT_MAX / sizeof(struct blk_zone))
return -ERANGE;
rep.nr_zones = min(blkdev_nr_zones(bdev), rep.nr_zones);
zones = kvmalloc_array(rep.nr_zones, sizeof(struct blk_zone),
GFP_KERNEL | __GFP_ZERO);

View File

@@ -438,4 +438,6 @@ extern int blk_iolatency_init(struct request_queue *q);
static inline int blk_iolatency_init(struct request_queue *q) { return 0; }
#endif
struct bio *blk_next_bio(struct bio *bio, unsigned int nr_pages, gfp_t gfp);
#endif /* BLK_INTERNAL_H */

View File

@@ -537,6 +537,10 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
return blkdev_report_zones_ioctl(bdev, mode, cmd, arg);
case BLKRESETZONE:
return blkdev_reset_zones_ioctl(bdev, mode, cmd, arg);
case BLKGETZONESZ:
return put_uint(arg, bdev_zone_sectors(bdev));
case BLKGETNRZONES:
return put_uint(arg, blkdev_nr_zones(bdev));
case HDIO_GETGEO:
return blkdev_getgeo(bdev, argp);
case BLKRAGET:

View File

@@ -29,19 +29,30 @@
#include "blk-mq-debugfs.h"
#include "blk-mq-sched.h"
#include "blk-mq-tag.h"
#include "blk-stat.h"
/* Scheduling domains. */
#define CREATE_TRACE_POINTS
#include <trace/events/kyber.h>
/*
* Scheduling domains: the device is divided into multiple domains based on the
* request type.
*/
enum {
KYBER_READ,
KYBER_SYNC_WRITE,
KYBER_OTHER, /* Async writes, discard, etc. */
KYBER_WRITE,
KYBER_DISCARD,
KYBER_OTHER,
KYBER_NUM_DOMAINS,
};
enum {
KYBER_MIN_DEPTH = 256,
static const char *kyber_domain_names[] = {
[KYBER_READ] = "READ",
[KYBER_WRITE] = "WRITE",
[KYBER_DISCARD] = "DISCARD",
[KYBER_OTHER] = "OTHER",
};
enum {
/*
* In order to prevent starvation of synchronous requests by a flood of
* asynchronous requests, we reserve 25% of requests for synchronous
@@ -51,25 +62,87 @@ enum {
};
/*
* Initial device-wide depths for each scheduling domain.
* Maximum device-wide depth for each scheduling domain.
*
* Even for fast devices with lots of tags like NVMe, you can saturate
* the device with only a fraction of the maximum possible queue depth.
* So, we cap these to a reasonable value.
* Even for fast devices with lots of tags like NVMe, you can saturate the
* device with only a fraction of the maximum possible queue depth. So, we cap
* these to a reasonable value.
*/
static const unsigned int kyber_depth[] = {
[KYBER_READ] = 256,
[KYBER_SYNC_WRITE] = 128,
[KYBER_OTHER] = 64,
[KYBER_WRITE] = 128,
[KYBER_DISCARD] = 64,
[KYBER_OTHER] = 16,
};
/*
* Scheduling domain batch sizes. We favor reads.
* Default latency targets for each scheduling domain.
*/
static const u64 kyber_latency_targets[] = {
[KYBER_READ] = 2ULL * NSEC_PER_MSEC,
[KYBER_WRITE] = 10ULL * NSEC_PER_MSEC,
[KYBER_DISCARD] = 5ULL * NSEC_PER_SEC,
};
/*
* Batch size (number of requests we'll dispatch in a row) for each scheduling
* domain.
*/
static const unsigned int kyber_batch_size[] = {
[KYBER_READ] = 16,
[KYBER_SYNC_WRITE] = 8,
[KYBER_OTHER] = 8,
[KYBER_WRITE] = 8,
[KYBER_DISCARD] = 1,
[KYBER_OTHER] = 1,
};
/*
* Requests latencies are recorded in a histogram with buckets defined relative
* to the target latency:
*
* <= 1/4 * target latency
* <= 1/2 * target latency
* <= 3/4 * target latency
* <= target latency
* <= 1 1/4 * target latency
* <= 1 1/2 * target latency
* <= 1 3/4 * target latency
* > 1 3/4 * target latency
*/
enum {
/*
* The width of the latency histogram buckets is
* 1 / (1 << KYBER_LATENCY_SHIFT) * target latency.
*/
KYBER_LATENCY_SHIFT = 2,
/*
* The first (1 << KYBER_LATENCY_SHIFT) buckets are <= target latency,
* thus, "good".
*/
KYBER_GOOD_BUCKETS = 1 << KYBER_LATENCY_SHIFT,
/* There are also (1 << KYBER_LATENCY_SHIFT) "bad" buckets. */
KYBER_LATENCY_BUCKETS = 2 << KYBER_LATENCY_SHIFT,
};
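The bucket layout described above (eight buckets of width target/4, four "good" and four "bad") maps a measured latency to a bucket index the same way the new add_latency_sample() does. A small userspace sketch of that arithmetic, with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

#define KYBER_LATENCY_SHIFT   2
#define KYBER_LATENCY_BUCKETS (2 << KYBER_LATENCY_SHIFT)  /* 8 buckets */

/* Bucket width is target / 4; the first four buckets cover latencies up to
 * the target ("good"), the last four are "bad", and anything beyond
 * 1 3/4 * target lands in the final bucket. */
static unsigned int latency_bucket(uint64_t latency, uint64_t target)
{
    uint64_t divisor = target >> KYBER_LATENCY_SHIFT;
    uint64_t bucket;

    if (latency == 0)
        return 0;
    if (divisor == 0)
        divisor = 1;
    bucket = (latency - 1) / divisor;
    return bucket < KYBER_LATENCY_BUCKETS - 1 ?
           (unsigned int)bucket : KYBER_LATENCY_BUCKETS - 1;
}
```

For the 2 ms read target, a 2 ms latency lands in bucket 3 (the last "good" bucket) and anything at or above 3.5 ms lands in bucket 7.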
/*
* We measure both the total latency and the I/O latency (i.e., latency after
* submitting to the device).
*/
enum {
KYBER_TOTAL_LATENCY,
KYBER_IO_LATENCY,
};
static const char *kyber_latency_type_names[] = {
[KYBER_TOTAL_LATENCY] = "total",
[KYBER_IO_LATENCY] = "I/O",
};
/*
* Per-cpu latency histograms: total latency and I/O latency for each scheduling
* domain except for KYBER_OTHER.
*/
struct kyber_cpu_latency {
atomic_t buckets[KYBER_OTHER][2][KYBER_LATENCY_BUCKETS];
};
/*
@@ -88,12 +161,9 @@ struct kyber_ctx_queue {
struct kyber_queue_data {
struct request_queue *q;
struct blk_stat_callback *cb;
/*
* The device is divided into multiple scheduling domains based on the
* request type. Each domain has a fixed number of in-flight requests of
* that type device-wide, limited by these tokens.
* Each scheduling domain has a limited number of in-flight requests
* device-wide, limited by these tokens.
*/
struct sbitmap_queue domain_tokens[KYBER_NUM_DOMAINS];
@@ -103,8 +173,19 @@ struct kyber_queue_data {
*/
unsigned int async_depth;
struct kyber_cpu_latency __percpu *cpu_latency;
/* Timer for stats aggregation and adjusting domain tokens. */
struct timer_list timer;
unsigned int latency_buckets[KYBER_OTHER][2][KYBER_LATENCY_BUCKETS];
unsigned long latency_timeout[KYBER_OTHER];
int domain_p99[KYBER_OTHER];
/* Target latencies in nanoseconds. */
u64 read_lat_nsec, write_lat_nsec;
u64 latency_targets[KYBER_OTHER];
};
struct kyber_hctx_data {
@@ -124,233 +205,219 @@ static int kyber_domain_wake(wait_queue_entry_t *wait, unsigned mode, int flags,
static unsigned int kyber_sched_domain(unsigned int op)
{
if ((op & REQ_OP_MASK) == REQ_OP_READ)
switch (op & REQ_OP_MASK) {
case REQ_OP_READ:
return KYBER_READ;
else if ((op & REQ_OP_MASK) == REQ_OP_WRITE && op_is_sync(op))
return KYBER_SYNC_WRITE;
else
case REQ_OP_WRITE:
return KYBER_WRITE;
case REQ_OP_DISCARD:
return KYBER_DISCARD;
default:
return KYBER_OTHER;
}
}
enum {
NONE = 0,
GOOD = 1,
GREAT = 2,
BAD = -1,
AWFUL = -2,
};
#define IS_GOOD(status) ((status) > 0)
#define IS_BAD(status) ((status) < 0)
static int kyber_lat_status(struct blk_stat_callback *cb,
unsigned int sched_domain, u64 target)
static void flush_latency_buckets(struct kyber_queue_data *kqd,
struct kyber_cpu_latency *cpu_latency,
unsigned int sched_domain, unsigned int type)
{
u64 latency;
unsigned int *buckets = kqd->latency_buckets[sched_domain][type];
atomic_t *cpu_buckets = cpu_latency->buckets[sched_domain][type];
unsigned int bucket;
if (!cb->stat[sched_domain].nr_samples)
return NONE;
latency = cb->stat[sched_domain].mean;
if (latency >= 2 * target)
return AWFUL;
else if (latency > target)
return BAD;
else if (latency <= target / 2)
return GREAT;
else /* (latency <= target) */
return GOOD;
for (bucket = 0; bucket < KYBER_LATENCY_BUCKETS; bucket++)
buckets[bucket] += atomic_xchg(&cpu_buckets[bucket], 0);
}
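flush_latency_buckets() drains each per-cpu histogram into the aggregate one with an atomic exchange, so samples recorded concurrently on that CPU are neither lost nor double counted. A userspace C11 analogue of the drain step (names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

#define NR_BUCKETS 8

/* Drain a per-cpu histogram into the aggregate one. atomic_exchange reads
 * the current count and zeroes the slot in one step, so a sample added
 * between the read and the reset cannot be dropped. */
static void flush_buckets(unsigned int *agg, _Atomic unsigned int *percpu)
{
    for (int i = 0; i < NR_BUCKETS; i++)
        agg[i] += atomic_exchange(&percpu[i], 0);
}
```

Flushing twice in a row is a no-op the second time, since the first flush leaves every per-cpu slot at zero.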
/*
* Adjust the read or synchronous write depth given the status of reads and
* writes. The goal is that the latencies of the two domains are fair (i.e., if
* one is good, then the other is good).
* Calculate the histogram bucket with the given percentile rank, or -1 if there
* aren't enough samples yet.
*/
static void kyber_adjust_rw_depth(struct kyber_queue_data *kqd,
unsigned int sched_domain, int this_status,
int other_status)
static int calculate_percentile(struct kyber_queue_data *kqd,
unsigned int sched_domain, unsigned int type,
unsigned int percentile)
{
unsigned int orig_depth, depth;
unsigned int *buckets = kqd->latency_buckets[sched_domain][type];
unsigned int bucket, samples = 0, percentile_samples;
for (bucket = 0; bucket < KYBER_LATENCY_BUCKETS; bucket++)
samples += buckets[bucket];
if (!samples)
return -1;
/*
* If this domain had no samples, or reads and writes are both good or
* both bad, don't adjust the depth.
* We do the calculation once we have 500 samples or one second passes
* since the first sample was recorded, whichever comes first.
*/
if (this_status == NONE ||
(IS_GOOD(this_status) && IS_GOOD(other_status)) ||
(IS_BAD(this_status) && IS_BAD(other_status)))
return;
orig_depth = depth = kqd->domain_tokens[sched_domain].sb.depth;
if (other_status == NONE) {
depth++;
} else {
switch (this_status) {
case GOOD:
if (other_status == AWFUL)
depth -= max(depth / 4, 1U);
else
depth -= max(depth / 8, 1U);
break;
case GREAT:
if (other_status == AWFUL)
depth /= 2;
else
depth -= max(depth / 4, 1U);
break;
case BAD:
depth++;
break;
case AWFUL:
if (other_status == GREAT)
depth += 2;
else
depth++;
break;
}
if (!kqd->latency_timeout[sched_domain])
kqd->latency_timeout[sched_domain] = max(jiffies + HZ, 1UL);
if (samples < 500 &&
time_is_after_jiffies(kqd->latency_timeout[sched_domain])) {
return -1;
}
kqd->latency_timeout[sched_domain] = 0;
percentile_samples = DIV_ROUND_UP(samples * percentile, 100);
for (bucket = 0; bucket < KYBER_LATENCY_BUCKETS - 1; bucket++) {
if (buckets[bucket] >= percentile_samples)
break;
percentile_samples -= buckets[bucket];
}
memset(buckets, 0, sizeof(kqd->latency_buckets[sched_domain][type]));
trace_kyber_latency(kqd->q, kyber_domain_names[sched_domain],
kyber_latency_type_names[type], percentile,
bucket + 1, 1 << KYBER_LATENCY_SHIFT, samples);
return bucket;
}
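The core of calculate_percentile() is a cumulative walk over the histogram until ceil(samples * percentile / 100) samples have been covered. A standalone sketch of that walk, leaving out the kernel's 500-sample / 1-second gating and the trace point (names are illustrative):

```c
#include <assert.h>

/* Return the bucket containing the given percentile rank, or -1 if the
 * histogram is empty. Mirrors the loop in calculate_percentile(): subtract
 * each bucket's count from the remaining percentile_samples until a bucket
 * covers what is left. */
static int histogram_percentile(const unsigned int *buckets,
                                unsigned int nr_buckets,
                                unsigned int percentile)
{
    unsigned int samples = 0, percentile_samples, bucket;

    for (bucket = 0; bucket < nr_buckets; bucket++)
        samples += buckets[bucket];
    if (!samples)
        return -1;

    /* DIV_ROUND_UP(samples * percentile, 100) */
    percentile_samples = (samples * percentile + 99) / 100;

    for (bucket = 0; bucket < nr_buckets - 1; bucket++) {
        if (buckets[bucket] >= percentile_samples)
            break;
        percentile_samples -= buckets[bucket];
    }
    return (int)bucket;
}
```

With 100 samples distributed as {50, 30, 10, 5, 3, 1, 1, 0}, the p50 sits in bucket 0 and the p90 in bucket 2.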
static void kyber_resize_domain(struct kyber_queue_data *kqd,
unsigned int sched_domain, unsigned int depth)
{
depth = clamp(depth, 1U, kyber_depth[sched_domain]);
if (depth != orig_depth)
if (depth != kqd->domain_tokens[sched_domain].sb.depth) {
sbitmap_queue_resize(&kqd->domain_tokens[sched_domain], depth);
trace_kyber_adjust(kqd->q, kyber_domain_names[sched_domain],
depth);
}
}
/*
* Adjust the depth of other requests given the status of reads and synchronous
* writes. As long as either domain is doing fine, we don't throttle, but if
* both domains are doing badly, we throttle heavily.
*/
static void kyber_adjust_other_depth(struct kyber_queue_data *kqd,
int read_status, int write_status,
bool have_samples)
static void kyber_timer_fn(struct timer_list *t)
{
unsigned int orig_depth, depth;
int status;
struct kyber_queue_data *kqd = from_timer(kqd, t, timer);
unsigned int sched_domain;
int cpu;
bool bad = false;
orig_depth = depth = kqd->domain_tokens[KYBER_OTHER].sb.depth;
/* Sum all of the per-cpu latency histograms. */
for_each_online_cpu(cpu) {
struct kyber_cpu_latency *cpu_latency;
if (read_status == NONE && write_status == NONE) {
depth += 2;
} else if (have_samples) {
if (read_status == NONE)
status = write_status;
else if (write_status == NONE)
status = read_status;
else
status = max(read_status, write_status);
switch (status) {
case GREAT:
depth += 2;
break;
case GOOD:
depth++;
break;
case BAD:
depth -= max(depth / 4, 1U);
break;
case AWFUL:
depth /= 2;
break;
cpu_latency = per_cpu_ptr(kqd->cpu_latency, cpu);
for (sched_domain = 0; sched_domain < KYBER_OTHER; sched_domain++) {
flush_latency_buckets(kqd, cpu_latency, sched_domain,
KYBER_TOTAL_LATENCY);
flush_latency_buckets(kqd, cpu_latency, sched_domain,
KYBER_IO_LATENCY);
}
}
depth = clamp(depth, 1U, kyber_depth[KYBER_OTHER]);
if (depth != orig_depth)
sbitmap_queue_resize(&kqd->domain_tokens[KYBER_OTHER], depth);
}
/*
* Check if any domains have a high I/O latency, which might indicate
* congestion in the device. Note that we use the p90; we don't want to
* be too sensitive to outliers here.
*/
for (sched_domain = 0; sched_domain < KYBER_OTHER; sched_domain++) {
int p90;
/*
* Apply heuristics for limiting queue depths based on gathered latency
* statistics.
*/
static void kyber_stat_timer_fn(struct blk_stat_callback *cb)
{
struct kyber_queue_data *kqd = cb->data;
int read_status, write_status;
read_status = kyber_lat_status(cb, KYBER_READ, kqd->read_lat_nsec);
write_status = kyber_lat_status(cb, KYBER_SYNC_WRITE, kqd->write_lat_nsec);
kyber_adjust_rw_depth(kqd, KYBER_READ, read_status, write_status);
kyber_adjust_rw_depth(kqd, KYBER_SYNC_WRITE, write_status, read_status);
kyber_adjust_other_depth(kqd, read_status, write_status,
cb->stat[KYBER_OTHER].nr_samples != 0);
p90 = calculate_percentile(kqd, sched_domain, KYBER_IO_LATENCY,
90);
if (p90 >= KYBER_GOOD_BUCKETS)
bad = true;
}
/*
* Continue monitoring latencies if we aren't hitting the targets or
* we're still throttling other requests.
* Adjust the scheduling domain depths. If we determined that there was
* congestion, we throttle all domains with good latencies. Either way,
* we ease up on throttling domains with bad latencies.
*/
if (!blk_stat_is_active(kqd->cb) &&
((IS_BAD(read_status) || IS_BAD(write_status) ||
kqd->domain_tokens[KYBER_OTHER].sb.depth < kyber_depth[KYBER_OTHER])))
blk_stat_activate_msecs(kqd->cb, 100);
for (sched_domain = 0; sched_domain < KYBER_OTHER; sched_domain++) {
unsigned int orig_depth, depth;
int p99;
p99 = calculate_percentile(kqd, sched_domain,
KYBER_TOTAL_LATENCY, 99);
/*
* This is kind of subtle: different domains will not
* necessarily have enough samples to calculate the latency
* percentiles during the same window, so we have to remember
* the p99 for the next time we observe congestion; once we do,
* we don't want to throttle again until we get more data, so we
* reset it to -1.
*/
if (bad) {
if (p99 < 0)
p99 = kqd->domain_p99[sched_domain];
kqd->domain_p99[sched_domain] = -1;
} else if (p99 >= 0) {
kqd->domain_p99[sched_domain] = p99;
}
if (p99 < 0)
continue;
/*
* If this domain has bad latency, throttle less. Otherwise,
* throttle more iff we determined that there is congestion.
*
* The new depth is scaled linearly with the p99 latency vs the
* latency target. E.g., if the p99 is 3/4 of the target, then
* we throttle down to 3/4 of the current depth, and if the p99
* is 2x the target, then we double the depth.
*/
if (bad || p99 >= KYBER_GOOD_BUCKETS) {
orig_depth = kqd->domain_tokens[sched_domain].sb.depth;
depth = (orig_depth * (p99 + 1)) >> KYBER_LATENCY_SHIFT;
kyber_resize_domain(kqd, sched_domain, depth);
}
}
}
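The depth adjustment at the end of kyber_timer_fn() scales the token depth linearly with the p99 bucket: `depth * (p99 + 1) >> KYBER_LATENCY_SHIFT`. With four "good" buckets, a p99 in bucket 3 (right at the target) leaves the depth unchanged, lower buckets shrink it, and higher buckets grow it. A minimal sketch of just that formula, with the kernel's upper clamp omitted:

```c
#include <assert.h>

#define KYBER_LATENCY_SHIFT 2

/* Scale a domain's token depth by the ratio of the p99 latency bucket to
 * the target bucket. Bucket 3 is the target, so (3 + 1) >> 2 == 1x. The
 * kernel additionally clamps to [1, kyber_depth[domain]]; only the lower
 * bound is kept here. */
static unsigned int scale_depth(unsigned int depth, int p99_bucket)
{
    unsigned int new_depth =
        (depth * (unsigned int)(p99_bucket + 1)) >> KYBER_LATENCY_SHIFT;

    return new_depth ? new_depth : 1;
}
```

For example, a read domain at depth 128 stays at 128 when the p99 hits the target exactly, halves to 64 when the p99 falls in bucket 1, and doubles to 256 when it falls in the last bucket.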
static unsigned int kyber_sched_tags_shift(struct kyber_queue_data *kqd)
static unsigned int kyber_sched_tags_shift(struct request_queue *q)
{
/*
* All of the hardware queues have the same depth, so we can just grab
* the shift of the first one.
*/
return kqd->q->queue_hw_ctx[0]->sched_tags->bitmap_tags.sb.shift;
}
static int kyber_bucket_fn(const struct request *rq)
{
return kyber_sched_domain(rq->cmd_flags);
return q->queue_hw_ctx[0]->sched_tags->bitmap_tags.sb.shift;
}
static struct kyber_queue_data *kyber_queue_data_alloc(struct request_queue *q)
{
struct kyber_queue_data *kqd;
unsigned int max_tokens;
unsigned int shift;
int ret = -ENOMEM;
int i;
kqd = kmalloc_node(sizeof(*kqd), GFP_KERNEL, q->node);
kqd = kzalloc_node(sizeof(*kqd), GFP_KERNEL, q->node);
if (!kqd)
goto err;
kqd->q = q;
kqd->cb = blk_stat_alloc_callback(kyber_stat_timer_fn, kyber_bucket_fn,
KYBER_NUM_DOMAINS, kqd);
if (!kqd->cb)
kqd->cpu_latency = alloc_percpu_gfp(struct kyber_cpu_latency,
GFP_KERNEL | __GFP_ZERO);
if (!kqd->cpu_latency)
goto err_kqd;
/*
* The maximum number of tokens for any scheduling domain is at least
* the queue depth of a single hardware queue. If the hardware doesn't
* have many tags, still provide a reasonable number.
*/
max_tokens = max_t(unsigned int, q->tag_set->queue_depth,
KYBER_MIN_DEPTH);
timer_setup(&kqd->timer, kyber_timer_fn, 0);
for (i = 0; i < KYBER_NUM_DOMAINS; i++) {
WARN_ON(!kyber_depth[i]);
WARN_ON(!kyber_batch_size[i]);
ret = sbitmap_queue_init_node(&kqd->domain_tokens[i],
max_tokens, -1, false, GFP_KERNEL,
q->node);
kyber_depth[i], -1, false,
GFP_KERNEL, q->node);
if (ret) {
while (--i >= 0)
sbitmap_queue_free(&kqd->domain_tokens[i]);
goto err_cb;
goto err_buckets;
}
sbitmap_queue_resize(&kqd->domain_tokens[i], kyber_depth[i]);
}
shift = kyber_sched_tags_shift(kqd);
kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
for (i = 0; i < KYBER_OTHER; i++) {
kqd->domain_p99[i] = -1;
kqd->latency_targets[i] = kyber_latency_targets[i];
}
kqd->read_lat_nsec = 2000000ULL;
kqd->write_lat_nsec = 10000000ULL;
shift = kyber_sched_tags_shift(q);
kqd->async_depth = (1U << shift) * KYBER_ASYNC_PERCENT / 100U;
return kqd;
err_cb:
blk_stat_free_callback(kqd->cb);
err_buckets:
free_percpu(kqd->cpu_latency);
err_kqd:
kfree(kqd);
err:
@@ -372,25 +439,24 @@ static int kyber_init_sched(struct request_queue *q, struct elevator_type *e)
return PTR_ERR(kqd);
}
blk_stat_enable_accounting(q);
eq->elevator_data = kqd;
q->elevator = eq;
blk_stat_add_callback(q, kqd->cb);
return 0;
}
static void kyber_exit_sched(struct elevator_queue *e)
{
struct kyber_queue_data *kqd = e->elevator_data;
struct request_queue *q = kqd->q;
int i;
blk_stat_remove_callback(q, kqd->cb);
del_timer_sync(&kqd->timer);
for (i = 0; i < KYBER_NUM_DOMAINS; i++)
sbitmap_queue_free(&kqd->domain_tokens[i]);
blk_stat_free_callback(kqd->cb);
free_percpu(kqd->cpu_latency);
kfree(kqd);
}
@@ -558,41 +624,44 @@ static void kyber_finish_request(struct request *rq)
rq_clear_domain_token(kqd, rq);
}
static void kyber_completed_request(struct request *rq)
static void add_latency_sample(struct kyber_cpu_latency *cpu_latency,
unsigned int sched_domain, unsigned int type,
u64 target, u64 latency)
{
struct request_queue *q = rq->q;
struct kyber_queue_data *kqd = q->elevator->elevator_data;
unsigned int sched_domain;
u64 now, latency, target;
unsigned int bucket;
u64 divisor;
/*
* Check if this request met our latency goal. If not, quickly gather
* some statistics and start throttling.
*/
sched_domain = kyber_sched_domain(rq->cmd_flags);
switch (sched_domain) {
case KYBER_READ:
target = kqd->read_lat_nsec;
break;
case KYBER_SYNC_WRITE:
target = kqd->write_lat_nsec;
break;
default:
return;
if (latency > 0) {
divisor = max_t(u64, target >> KYBER_LATENCY_SHIFT, 1);
bucket = min_t(unsigned int, div64_u64(latency - 1, divisor),
KYBER_LATENCY_BUCKETS - 1);
} else {
bucket = 0;
}
/* If we are already monitoring latencies, don't check again. */
if (blk_stat_is_active(kqd->cb))
atomic_inc(&cpu_latency->buckets[sched_domain][type][bucket]);
}
static void kyber_completed_request(struct request *rq, u64 now)
{
struct kyber_queue_data *kqd = rq->q->elevator->elevator_data;
struct kyber_cpu_latency *cpu_latency;
unsigned int sched_domain;
u64 target;
sched_domain = kyber_sched_domain(rq->cmd_flags);
if (sched_domain == KYBER_OTHER)
return;
now = ktime_get_ns();
if (now < rq->io_start_time_ns)
return;
cpu_latency = get_cpu_ptr(kqd->cpu_latency);
target = kqd->latency_targets[sched_domain];
add_latency_sample(cpu_latency, sched_domain, KYBER_TOTAL_LATENCY,
target, now - rq->start_time_ns);
add_latency_sample(cpu_latency, sched_domain, KYBER_IO_LATENCY, target,
now - rq->io_start_time_ns);
put_cpu_ptr(kqd->cpu_latency);
latency = now - rq->io_start_time_ns;
if (latency > target)
blk_stat_activate_msecs(kqd->cb, 10);
timer_reduce(&kqd->timer, jiffies + HZ / 10);
}
struct flush_kcq_data {
@@ -713,6 +782,9 @@ kyber_dispatch_cur_domain(struct kyber_queue_data *kqd,
rq_set_domain_token(rq, nr);
list_del_init(&rq->queuelist);
return rq;
} else {
trace_kyber_throttled(kqd->q,
kyber_domain_names[khd->cur_domain]);
}
} else if (sbitmap_any_bit_set(&khd->kcq_map[khd->cur_domain])) {
nr = kyber_get_domain_token(kqd, khd, hctx);
@@ -723,6 +795,9 @@ kyber_dispatch_cur_domain(struct kyber_queue_data *kqd,
rq_set_domain_token(rq, nr);
list_del_init(&rq->queuelist);
return rq;
} else {
trace_kyber_throttled(kqd->q,
kyber_domain_names[khd->cur_domain]);
}
}
@@ -790,17 +865,17 @@ static bool kyber_has_work(struct blk_mq_hw_ctx *hctx)
return false;
}
#define KYBER_LAT_SHOW_STORE(op) \
static ssize_t kyber_##op##_lat_show(struct elevator_queue *e, \
char *page) \
#define KYBER_LAT_SHOW_STORE(domain, name) \
static ssize_t kyber_##name##_lat_show(struct elevator_queue *e, \
char *page) \
{ \
struct kyber_queue_data *kqd = e->elevator_data; \
\
return sprintf(page, "%llu\n", kqd->op##_lat_nsec); \
return sprintf(page, "%llu\n", kqd->latency_targets[domain]); \
} \
\
static ssize_t kyber_##op##_lat_store(struct elevator_queue *e, \
const char *page, size_t count) \
static ssize_t kyber_##name##_lat_store(struct elevator_queue *e, \
const char *page, size_t count) \
{ \
struct kyber_queue_data *kqd = e->elevator_data; \
unsigned long long nsec; \
@@ -810,12 +885,12 @@ static ssize_t kyber_##op##_lat_store(struct elevator_queue *e, \
if (ret) \
return ret; \
\
kqd->op##_lat_nsec = nsec; \
kqd->latency_targets[domain] = nsec; \
\
return count; \
}
KYBER_LAT_SHOW_STORE(read);
KYBER_LAT_SHOW_STORE(write);
KYBER_LAT_SHOW_STORE(KYBER_READ, read);
KYBER_LAT_SHOW_STORE(KYBER_WRITE, write);
#undef KYBER_LAT_SHOW_STORE
#define KYBER_LAT_ATTR(op) __ATTR(op##_lat_nsec, 0644, kyber_##op##_lat_show, kyber_##op##_lat_store)
@@ -882,7 +957,8 @@ static int kyber_##name##_waiting_show(void *data, struct seq_file *m) \
return 0; \
}
KYBER_DEBUGFS_DOMAIN_ATTRS(KYBER_READ, read)
KYBER_DEBUGFS_DOMAIN_ATTRS(KYBER_SYNC_WRITE, sync_write)
KYBER_DEBUGFS_DOMAIN_ATTRS(KYBER_WRITE, write)
KYBER_DEBUGFS_DOMAIN_ATTRS(KYBER_DISCARD, discard)
KYBER_DEBUGFS_DOMAIN_ATTRS(KYBER_OTHER, other)
#undef KYBER_DEBUGFS_DOMAIN_ATTRS
@@ -900,20 +976,7 @@ static int kyber_cur_domain_show(void *data, struct seq_file *m)
struct blk_mq_hw_ctx *hctx = data;
struct kyber_hctx_data *khd = hctx->sched_data;
switch (khd->cur_domain) {
case KYBER_READ:
seq_puts(m, "READ\n");
break;
case KYBER_SYNC_WRITE:
seq_puts(m, "SYNC_WRITE\n");
break;
case KYBER_OTHER:
seq_puts(m, "OTHER\n");
break;
default:
seq_printf(m, "%u\n", khd->cur_domain);
break;
}
seq_printf(m, "%s\n", kyber_domain_names[khd->cur_domain]);
return 0;
}
@@ -930,7 +993,8 @@ static int kyber_batching_show(void *data, struct seq_file *m)
{#name "_tokens", 0400, kyber_##name##_tokens_show}
static const struct blk_mq_debugfs_attr kyber_queue_debugfs_attrs[] = {
KYBER_QUEUE_DOMAIN_ATTRS(read),
KYBER_QUEUE_DOMAIN_ATTRS(sync_write),
KYBER_QUEUE_DOMAIN_ATTRS(write),
KYBER_QUEUE_DOMAIN_ATTRS(discard),
KYBER_QUEUE_DOMAIN_ATTRS(other),
{"async_depth", 0400, kyber_async_depth_show},
{},
@@ -942,7 +1006,8 @@ static const struct blk_mq_debugfs_attr kyber_queue_debugfs_attrs[] = {
{#name "_waiting", 0400, kyber_##name##_waiting_show}
static const struct blk_mq_debugfs_attr kyber_hctx_debugfs_attrs[] = {
KYBER_HCTX_DOMAIN_ATTRS(read),
KYBER_HCTX_DOMAIN_ATTRS(sync_write),
KYBER_HCTX_DOMAIN_ATTRS(write),
KYBER_HCTX_DOMAIN_ATTRS(discard),
KYBER_HCTX_DOMAIN_ATTRS(other),
{"cur_domain", 0400, kyber_cur_domain_show},
{"batching", 0400, kyber_batching_show},

View File

@@ -132,8 +132,7 @@ enum {
BINDER_DEBUG_PRIORITY_CAP = 1U << 13,
BINDER_DEBUG_SPINLOCKS = 1U << 14,
};
static uint32_t binder_debug_mask = BINDER_DEBUG_USER_ERROR |
BINDER_DEBUG_FAILED_TRANSACTION | BINDER_DEBUG_DEAD_TRANSACTION;
static uint32_t binder_debug_mask = 0;
module_param_named(debug_mask, binder_debug_mask, uint, 0644);
char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES;
@@ -155,6 +154,7 @@ static int binder_set_stop_on_user_error(const char *val,
module_param_call(stop_on_user_error, binder_set_stop_on_user_error,
param_get_int, &binder_stop_on_user_error, 0644);
#ifdef DEBUG
#define binder_debug(mask, x...) \
do { \
if (binder_debug_mask & mask) \
@@ -168,6 +168,16 @@ module_param_call(stop_on_user_error, binder_set_stop_on_user_error,
if (binder_stop_on_user_error) \
binder_stop_on_user_error = 2; \
} while (0)
#else
static inline void binder_debug(uint32_t mask, const char *fmt, ...)
{
}
static inline void binder_user_error(const char *fmt, ...)
{
if (binder_stop_on_user_error)
binder_stop_on_user_error = 2;
}
#endif
#define to_flat_binder_object(hdr) \
container_of(hdr, struct flat_binder_object, hdr)
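The binder hunk above wraps binder_debug()/binder_user_error() so that without DEBUG they become empty inline functions: call sites still type-check their format arguments but compile to nothing. A hypothetical userspace illustration of the same pattern (my_debug and log_buf are invented names; DEBUG is assumed undefined here):

```c
#include <stdarg.h>
#include <stdio.h>

/* Compile-time toggled debug output: a real macro under DEBUG, an empty
 * inline function otherwise, so callers need no #ifdef of their own. */
static char log_buf[128];

#ifdef DEBUG
#define my_debug(fmt, ...) \
    snprintf(log_buf, sizeof(log_buf), fmt, ##__VA_ARGS__)
#else
static inline void my_debug(const char *fmt, ...)
{
    (void)fmt;  /* compiled out: arguments are evaluated but discarded */
}
#endif
```

With DEBUG undefined, calls like `my_debug("x=%d", 5)` leave log_buf untouched, which is exactly the behavior the binder change gives release builds.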

View File

@@ -2234,7 +2234,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
* because new_policy is a copy of policy with one field updated.
*/
if (new_policy->min > new_policy->max)
return -EINVAL;
new_policy->min = new_policy->max;
/* verify the cpu speed can be set within this limit */
ret = cpufreq_driver->verify(new_policy);
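The cpufreq hunk above changes an inconsistent policy from a hard error into a silent clamp: if the requested minimum exceeds the maximum, the minimum is pulled down to the maximum instead of returning -EINVAL. A minimal sketch of the new behavior (struct and function names are illustrative):

```c
#include <assert.h>

struct policy {
    unsigned int min;   /* kHz */
    unsigned int max;   /* kHz */
};

/* Sketch of the patched cpufreq_set_policy() check: clamp min to max
 * rather than rejecting the policy outright. */
static int set_policy(struct policy *p)
{
    if (p->min > p->max)
        p->min = p->max;    /* was: return -EINVAL */
    return 0;
}
```

A request of min=2000, max=1500 now succeeds with min adjusted to 1500.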

View File

@@ -198,7 +198,7 @@ int cpuidle_enter_s2idle(struct cpuidle_driver *drv, struct cpuidle_device *dev)
* @drv: cpuidle driver for this cpu
* @index: index into the states table in @drv of the state to enter
*/
int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
int __nocfi cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
int index)
{
int entered_state;

View File

@@ -35,6 +35,48 @@
#include <mmu/mali_kbase_mmu.h>
#include <context/mali_kbase_context_internal.h>
#define to_kprcs(kobj) container_of(kobj, struct kbase_process, kobj)
static void kbase_kprcs_release(struct kobject *kobj)
{
// Nothing to release
}
static ssize_t total_gpu_mem_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct kbase_process *kprcs = to_kprcs(kobj);
if (WARN_ON(!kprcs))
return 0;
return sysfs_emit(buf, "%lu\n",
(unsigned long) kprcs->total_gpu_pages << PAGE_SHIFT);
}
static struct kobj_attribute total_gpu_mem_attr = __ATTR_RO(total_gpu_mem);
static ssize_t dma_buf_gpu_mem_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct kbase_process *kprcs = to_kprcs(kobj);
if (WARN_ON(!kprcs))
return 0;
return sysfs_emit(buf, "%lu\n",
(unsigned long) kprcs->dma_buf_pages << PAGE_SHIFT);
}
static struct kobj_attribute dma_buf_gpu_mem_attr = __ATTR_RO(dma_buf_gpu_mem);
static struct attribute *kprcs_attrs[] = {
&total_gpu_mem_attr.attr,
&dma_buf_gpu_mem_attr.attr,
NULL
};
ATTRIBUTE_GROUPS(kprcs);
static struct kobj_type kprcs_ktype = {
.release = kbase_kprcs_release,
.sysfs_ops = &kobj_sysfs_ops,
.default_groups = kprcs_groups,
};
/**
* find_process_node - Used to traverse the process rb_tree to find if
* process exists already in process rb_tree.
@@ -102,6 +144,11 @@ static int kbase_insert_kctx_to_process(struct kbase_context *kctx)
INIT_LIST_HEAD(&kprcs->kctx_list);
kprcs->dma_buf_root = RB_ROOT;
kprcs->total_gpu_pages = 0;
kprcs->dma_buf_pages = 0;
WARN_ON(kobject_init_and_add(
&kprcs->kobj, &kprcs_ktype,
kctx->kbdev->proc_sysfs_node,
"%d", tgid));
while (*new) {
struct kbase_process *prcs_node;
@@ -239,6 +286,8 @@ static void kbase_remove_kctx_from_process(struct kbase_context *kctx)
*/
WARN_ON(kprcs->total_gpu_pages);
WARN_ON(!RB_EMPTY_ROOT(&kprcs->dma_buf_root));
kobject_del(&kprcs->kobj);
kobject_put(&kprcs->kobj);
kfree(kprcs);
}
}

View File

@@ -4288,6 +4288,36 @@ static struct attribute *kbase_scheduling_attrs[] = {
NULL
};
static ssize_t total_gpu_mem_show(
struct device *dev,
struct device_attribute *attr,
char *const buf)
{
struct kbase_device *kbdev;
kbdev = to_kbase_device(dev);
if (!kbdev)
return -ENODEV;
return sysfs_emit(buf, "%lu\n",
(unsigned long) kbdev->total_gpu_pages << PAGE_SHIFT);
}
static DEVICE_ATTR_RO(total_gpu_mem);
static ssize_t dma_buf_gpu_mem_show(
struct device *dev,
struct device_attribute *attr,
char *const buf)
{
struct kbase_device *kbdev;
kbdev = to_kbase_device(dev);
if (!kbdev)
return -ENODEV;
return sysfs_emit(buf, "%lu\n",
(unsigned long) kbdev->dma_buf_pages << PAGE_SHIFT);
}
static DEVICE_ATTR_RO(dma_buf_gpu_mem);
static struct attribute *kbase_attrs[] = {
#ifdef CONFIG_MALI_DEBUG
&dev_attr_debug_command.attr,
@@ -4307,6 +4337,8 @@ static struct attribute *kbase_attrs[] = {
&dev_attr_lp_mem_pool_size.attr,
&dev_attr_lp_mem_pool_max_size.attr,
&dev_attr_js_ctx_scheduling_mode.attr,
&dev_attr_total_gpu_mem.attr,
&dev_attr_dma_buf_gpu_mem.attr,
NULL
};
@@ -4342,6 +4374,9 @@ int kbase_sysfs_init(struct kbase_device *kbdev)
}
}
kbdev->proc_sysfs_node = kobject_create_and_add("kprcs",
&kbdev->dev->kobj);
return err;
}
@@ -4349,6 +4384,8 @@ void kbase_sysfs_term(struct kbase_device *kbdev)
{
sysfs_remove_group(&kbdev->dev->kobj, &kbase_scheduling_attr_group);
sysfs_remove_group(&kbdev->dev->kobj, &kbase_attr_group);
kobject_del(kbdev->proc_sysfs_node);
kobject_put(kbdev->proc_sysfs_node);
put_device(kbdev->dev);
}

View File

@@ -652,6 +652,9 @@ struct kbase_devfreq_queue_info {
* @total_gpu_pages: Total gpu pages allocated across all the contexts
* of this process, it accounts for both native allocations
* and dma_buf imported allocations.
* @dma_buf_pages: Total dma_buf pages allocated across all the contexts
* of this process, native allocations can be accounted for
* by subtracting this from &total_gpu_pages.
* @kctx_list: List of kbase contexts created for the process.
* @kprcs_node: Node to a rb_tree, kbase_device will maintain a rb_tree
* based on key tgid, kprcs_node is the node link to
@@ -661,14 +664,19 @@ struct kbase_devfreq_queue_info {
* Used to ensure that pages of allocation are accounted
* only once for the process, even if the allocation gets
* imported multiple times for the process.
* @kobj: Links to the per-process sysfs node
* &kbase_device.proc_sysfs_node.
*/
struct kbase_process {
pid_t tgid;
size_t total_gpu_pages;
size_t dma_buf_pages;
struct list_head kctx_list;
struct rb_node kprcs_node;
struct rb_root dma_buf_root;
struct kobject kobj;
};
/**
@@ -927,11 +935,13 @@ struct kbase_process {
* mapping and gpu memory usage at device level and
* other one at process level.
* @total_gpu_pages: Total GPU pages used for the complete GPU device.
* @dma_buf_pages: Total dma_buf pages used for GPU platform device.
* @dma_buf_lock: This mutex should be held while accounting for
* @total_gpu_pages from imported dma buffers.
* @gpu_mem_usage_lock: This spinlock should be held while accounting
* @total_gpu_pages for both native and dma-buf imported
* allocations.
* @proc_sysfs_node: Sysfs directory node to store per-process stats.
*/
struct kbase_device {
u32 hw_quirks_sc;
@@ -1173,6 +1183,7 @@ struct kbase_device {
struct rb_root dma_buf_root;
size_t total_gpu_pages;
size_t dma_buf_pages;
struct mutex dma_buf_lock;
spinlock_t gpu_mem_usage_lock;
@@ -1192,6 +1203,8 @@ struct kbase_device {
struct job_status_qos job_status_addr;
struct v1_data* v1;
#endif
struct kobject *proc_sysfs_node;
};
/**

View File

@@ -91,4 +91,3 @@ ifeq (, $(findstring $(CONFIG_MTK_PLATFORM), "mt6768" "mt6785"))
ccflags-y += -DCONFIG_MTK_GPU_DEBUG
ccflags-y += -DCONFIG_MTK_GPU_DEBUG_DFD
endif
ccflags-y += -DCONFIG_MTK_GPU_MEM_TRACK

View File

@@ -39,6 +39,48 @@
#include <mmu/mali_kbase_mmu.h>
#include <context/mali_kbase_context_internal.h>
#define to_kprcs(kobj) container_of(kobj, struct kbase_process, kobj)
static void kbase_kprcs_release(struct kobject *kobj)
{
// Nothing to release
}
static ssize_t total_gpu_mem_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct kbase_process *kprcs = to_kprcs(kobj);
if (WARN_ON(!kprcs))
return 0;
return sysfs_emit(buf, "%lu\n",
(unsigned long) kprcs->total_gpu_pages << PAGE_SHIFT);
}
static struct kobj_attribute total_gpu_mem_attr = __ATTR_RO(total_gpu_mem);
static ssize_t dma_buf_gpu_mem_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct kbase_process *kprcs = to_kprcs(kobj);
if (WARN_ON(!kprcs))
return 0;
return sysfs_emit(buf, "%lu\n",
(unsigned long) kprcs->dma_buf_pages << PAGE_SHIFT);
}
static struct kobj_attribute dma_buf_gpu_mem_attr = __ATTR_RO(dma_buf_gpu_mem);
static struct attribute *kprcs_attrs[] = {
&total_gpu_mem_attr.attr,
&dma_buf_gpu_mem_attr.attr,
NULL
};
ATTRIBUTE_GROUPS(kprcs);
static struct kobj_type kprcs_ktype = {
.release = kbase_kprcs_release,
.sysfs_ops = &kobj_sysfs_ops,
.default_groups = kprcs_groups,
};
/**
* find_process_node - Used to traverse the process rb_tree to find if
* process exists already in process rb_tree.
@@ -106,6 +148,11 @@ static int kbase_insert_kctx_to_process(struct kbase_context *kctx)
INIT_LIST_HEAD(&kprcs->kctx_list);
kprcs->dma_buf_root = RB_ROOT;
kprcs->total_gpu_pages = 0;
kprcs->dma_buf_pages = 0;
WARN_ON(kobject_init_and_add(
&kprcs->kobj, &kprcs_ktype,
kctx->kbdev->proc_sysfs_node,
"%d", tgid));
while (*new) {
struct kbase_process *prcs_node;
@@ -293,6 +340,8 @@ static void kbase_remove_kctx_from_process(struct kbase_context *kctx)
*/
WARN_ON(kprcs->total_gpu_pages);
WARN_ON(!RB_EMPTY_ROOT(&kprcs->dma_buf_root));
kobject_del(&kprcs->kobj);
kobject_put(&kprcs->kobj);
kfree(kprcs);
}
}

View File

@@ -2762,6 +2762,33 @@ static void process_prfcnt_interrupts(struct kbase_device *kbdev, u32 glb_req,
}
}
static void order_job_irq_clear_with_iface_mem_read(void)
{
/* Ensure that the write to JOB_IRQ_CLEAR is ordered with regard to the
 * read from interface memory. The ordering is needed considering the way
 * FW & Kbase write to the JOB_IRQ_RAWSTAT and JOB_IRQ_CLEAR registers
 * without any synchronization. Without the barrier there is no guarantee
 * about the ordering; the write to IRQ_CLEAR can take effect after the read
 * from interface memory, and that could cause a problem for the scenario where
* FW sends back to back notifications for the same CSG for events like
* SYNC_UPDATE and IDLE, but Kbase gets a single IRQ and observes only the
* first event. Similar thing can happen with glb events like CFG_ALLOC_EN
* acknowledgment and GPU idle notification.
*
* MCU CPU
* --------------- ----------------
* Update interface memory Write to IRQ_CLEAR to clear current IRQ
* <barrier> <barrier>
* Write to IRQ_RAWSTAT to raise new IRQ Read interface memory
*/
#if KERNEL_VERSION(5, 10, 0) <= LINUX_VERSION_CODE
__iomb();
#else
/* CPU and GPU would be in the same Outer shareable domain */
dmb(osh);
#endif
}
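The comment above describes a store-fence-load protocol between the CPU and the MCU firmware. The following is a hypothetical single-threaded sketch of the CPU side using C11 atomics; the variable names (`irq_rawstat`, `iface_event`) are stand-ins for the hardware registers and interface memory, not driver code, and `atomic_thread_fence` plays the role of `__iomb()`/`dmb(osh)`.

```c
#include <stdatomic.h>

/* Illustrative sketch of the ordering protocol in the comment above:
 * the CPU must make its IRQ-clear store visible before it loads from
 * shared interface memory, or a back-to-back notification can be lost.
 * Names are stand-ins, not driver code. */
static _Atomic unsigned int irq_rawstat;  /* stands in for JOB_IRQ_RAWSTAT */
static _Atomic unsigned int iface_event;  /* stands in for interface memory */

static unsigned int cpu_handle_irq(void)
{
    /* clear the interrupt first ... */
    atomic_store_explicit(&irq_rawstat, 0, memory_order_relaxed);
    /* ... then fence, so the store cannot be reordered past the load
     * (the role played by __iomb()/dmb(osh) in the kernel code) ... */
    atomic_thread_fence(memory_order_seq_cst);
    /* ... then read the event state published by the firmware */
    return atomic_load_explicit(&iface_event, memory_order_relaxed);
}
```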
void kbase_csf_interrupt(struct kbase_device *kbdev, u32 val)
{
unsigned long flags;
@@ -2771,6 +2798,7 @@ void kbase_csf_interrupt(struct kbase_device *kbdev, u32 val)
KBASE_KTRACE_ADD(kbdev, CSF_INTERRUPT, NULL, val);
kbase_reg_write(kbdev, JOB_CONTROL_REG(JOB_IRQ_CLEAR), val);
order_job_irq_clear_with_iface_mem_read();
if (val & JOB_IRQ_GLOBAL_IF) {
const struct kbase_csf_global_iface *const global_iface =

View File

@@ -539,6 +539,8 @@ struct kbase_csf_cpu_queue_context {
/**
* struct kbase_csf_heap_context_allocator - Allocator of heap contexts
*
* @heap_context_size_aligned: Size of a heap context structure, in bytes,
* aligned to GPU cacheline size.
* Heap context structures are allocated by the kernel for use by the firmware.
* The current implementation subdivides a single GPU memory region for use as
* a sparse array.
@@ -560,6 +562,7 @@ struct kbase_csf_heap_context_allocator {
u64 gpu_va;
struct mutex lock;
DECLARE_BITMAP(in_use, MAX_TILER_HEAPS);
u32 heap_context_size_aligned;
};
/**

View File

@@ -23,10 +23,7 @@
#include "mali_kbase_csf_heap_context_alloc.h"
/* Size of one heap context structure, in bytes. */
#define HEAP_CTX_SIZE ((size_t)32)
/* Total size of the GPU memory region allocated for heap contexts, in bytes. */
#define HEAP_CTX_REGION_SIZE (MAX_TILER_HEAPS * HEAP_CTX_SIZE)
#define HEAP_CTX_SIZE ((u32)32)
/**
* sub_alloc - Sub-allocate a heap context from a GPU memory region
@@ -38,8 +35,8 @@
static u64 sub_alloc(struct kbase_csf_heap_context_allocator *const ctx_alloc)
{
struct kbase_context *const kctx = ctx_alloc->kctx;
int heap_nr = 0;
size_t ctx_offset = 0;
unsigned long heap_nr = 0;
u32 ctx_offset = 0;
u64 heap_gpu_va = 0;
struct kbase_vmap_struct mapping;
void *ctx_ptr = NULL;
@@ -55,24 +52,24 @@ static u64 sub_alloc(struct kbase_csf_heap_context_allocator *const ctx_alloc)
return 0;
}
ctx_offset = heap_nr * HEAP_CTX_SIZE;
ctx_offset = heap_nr * ctx_alloc->heap_context_size_aligned;
heap_gpu_va = ctx_alloc->gpu_va + ctx_offset;
ctx_ptr = kbase_vmap_prot(kctx, heap_gpu_va,
HEAP_CTX_SIZE, KBASE_REG_CPU_WR, &mapping);
ctx_alloc->heap_context_size_aligned, KBASE_REG_CPU_WR, &mapping);
if (unlikely(!ctx_ptr)) {
dev_err(kctx->kbdev->dev,
"Failed to map tiler heap context %d (0x%llX)\n",
"Failed to map tiler heap context %lu (0x%llX)\n",
heap_nr, heap_gpu_va);
return 0;
}
memset(ctx_ptr, 0, HEAP_CTX_SIZE);
memset(ctx_ptr, 0, ctx_alloc->heap_context_size_aligned);
kbase_vunmap(ctx_ptr, &mapping);
bitmap_set(ctx_alloc->in_use, heap_nr, 1);
dev_dbg(kctx->kbdev->dev, "Allocated tiler heap context %d (0x%llX)\n",
dev_dbg(kctx->kbdev->dev, "Allocated tiler heap context %lu (0x%llX)\n",
heap_nr, heap_gpu_va);
return heap_gpu_va;
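`sub_alloc()` above pairs `find_first_zero_bit` with `bitmap_set` to hand out fixed-size slots from one region. A minimal userspace sketch of that scheme, assuming a small 8-slot pool held in a plain word instead of the kernel's `DECLARE_BITMAP`:

```c
#include <stdint.h>

/* Minimal sketch of the bitmap sub-allocation used by sub_alloc():
 * the first clear bit gives the slot index, freeing clears the bit.
 * The caller derives the offset as index * slot_size. Names assumed. */
#define NUM_SLOTS 8
static uint32_t in_use;            /* bit n set => slot n allocated */

static int slot_alloc(void)
{
    int n;

    for (n = 0; n < NUM_SLOTS; n++)
        if (!(in_use & (1u << n))) {
            in_use |= 1u << n;
            return n;
        }
    return -1;                     /* all slots taken */
}

static void slot_free(int n)
{
    in_use &= ~(1u << n);
}
```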
@@ -88,7 +85,7 @@ static void sub_free(struct kbase_csf_heap_context_allocator *const ctx_alloc,
u64 const heap_gpu_va)
{
struct kbase_context *const kctx = ctx_alloc->kctx;
u64 ctx_offset = 0;
u32 ctx_offset = 0;
unsigned int heap_nr = 0;
lockdep_assert_held(&ctx_alloc->lock);
@@ -99,15 +96,15 @@ static void sub_free(struct kbase_csf_heap_context_allocator *const ctx_alloc,
if (WARN_ON(heap_gpu_va < ctx_alloc->gpu_va))
return;
ctx_offset = heap_gpu_va - ctx_alloc->gpu_va;
ctx_offset = (u32)(heap_gpu_va - ctx_alloc->gpu_va);
if (WARN_ON(ctx_offset >= HEAP_CTX_REGION_SIZE) ||
WARN_ON(ctx_offset % HEAP_CTX_SIZE))
if (WARN_ON(ctx_offset >= (ctx_alloc->region->nr_pages << PAGE_SHIFT)) ||
WARN_ON(ctx_offset % ctx_alloc->heap_context_size_aligned))
return;
heap_nr = ctx_offset / HEAP_CTX_SIZE;
heap_nr = ctx_offset / ctx_alloc->heap_context_size_aligned;
dev_dbg(kctx->kbdev->dev,
"Freed tiler heap context %d (0x%llX)\n", heap_nr, heap_gpu_va);
"Freed tiler heap context %lu (0x%llX)\n", heap_nr, heap_gpu_va);
bitmap_clear(ctx_alloc->in_use, heap_nr, 1);
}
@@ -116,12 +113,17 @@ int kbase_csf_heap_context_allocator_init(
struct kbase_csf_heap_context_allocator *const ctx_alloc,
struct kbase_context *const kctx)
{
const u32 gpu_cache_line_size =
(1U << kctx->kbdev->gpu_props.props.l2_props.log2_line_size);
/* We cannot pre-allocate GPU memory here because the
* custom VA zone may not have been created yet.
*/
ctx_alloc->kctx = kctx;
ctx_alloc->region = NULL;
ctx_alloc->gpu_va = 0;
ctx_alloc->heap_context_size_aligned =
(HEAP_CTX_SIZE + gpu_cache_line_size - 1) & ~(gpu_cache_line_size - 1);
mutex_init(&ctx_alloc->lock);
bitmap_zero(ctx_alloc->in_use, MAX_TILER_HEAPS);
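The round-up in `kbase_csf_heap_context_allocator_init()` above, isolated as a helper: `(size + align - 1) & ~(align - 1)` rounds `size` up to a multiple of `align`, which is valid only when `align` is a power of two (GPU cacheline sizes are). With a 64-byte cacheline, the 32-byte heap context becomes 64 bytes, so two heap contexts can no longer share a line.

```c
#include <stdint.h>

/* Power-of-two round-up as used for heap_context_size_aligned above. */
static uint32_t align_up_pow2(uint32_t size, uint32_t align)
{
    return (size + align - 1) & ~(align - 1);
}
```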
@@ -156,7 +158,7 @@ u64 kbase_csf_heap_context_allocator_alloc(
struct kbase_context *const kctx = ctx_alloc->kctx;
u64 flags = BASE_MEM_PROT_GPU_RD | BASE_MEM_PROT_GPU_WR |
BASE_MEM_PROT_CPU_WR | BASEP_MEM_NO_USER_FREE;
u64 nr_pages = PFN_UP(HEAP_CTX_REGION_SIZE);
u64 nr_pages = PFN_UP(MAX_TILER_HEAPS * ctx_alloc->heap_context_size_aligned);
u64 heap_gpu_va = 0;
#ifdef CONFIG_MALI_VECTOR_DUMP

View File

@@ -1488,8 +1488,8 @@ static int kbasep_cs_tiler_heap_init(struct kbase_context *kctx,
{
if (heap_init->in.group_id >= MEMORY_GROUP_MANAGER_NR_GROUPS)
return -EINVAL;
kctx->jit_group_id = heap_init->in.group_id;
else
kctx->jit_group_id = heap_init->in.group_id;
return kbase_csf_tiler_heap_init(kctx, heap_init->in.chunk_size,
heap_init->in.initial_chunks, heap_init->in.max_chunks,
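The fix above rejects an out-of-range `group_id` before it is stored, instead of writing it unconditionally and indexing the memory group pool array out of bounds later. The pattern in isolation, with `NR_GROUPS` and the `-EINVAL` value assumed for illustration:

```c
/* Validate-before-use pattern from the ioctl fix above; names assumed. */
#define NR_GROUPS 16

static int set_jit_group(unsigned int *jit_group_id, unsigned int group_id)
{
    if (group_id >= NR_GROUPS)
        return -22;              /* -EINVAL: reject before any use */
    *jit_group_id = group_id;
    return 0;
}
```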
@@ -5089,6 +5089,36 @@ static struct attribute *kbase_scheduling_attrs[] = {
NULL
};
static ssize_t total_gpu_mem_show(
struct device *dev,
struct device_attribute *attr,
char *const buf)
{
struct kbase_device *kbdev;
kbdev = to_kbase_device(dev);
if (!kbdev)
return -ENODEV;
return sysfs_emit(buf, "%lu\n",
(unsigned long) kbdev->total_gpu_pages << PAGE_SHIFT);
}
static DEVICE_ATTR_RO(total_gpu_mem);
static ssize_t dma_buf_gpu_mem_show(
struct device *dev,
struct device_attribute *attr,
char *const buf)
{
struct kbase_device *kbdev;
kbdev = to_kbase_device(dev);
if (!kbdev)
return -ENODEV;
return sysfs_emit(buf, "%lu\n",
(unsigned long) kbdev->dma_buf_pages << PAGE_SHIFT);
}
static DEVICE_ATTR_RO(dma_buf_gpu_mem);
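Both sysfs callbacks above report bytes by shifting a page count left by `PAGE_SHIFT`. A sketch of that conversion, assuming 4 KiB pages (`PAGE_SHIFT` of 12); the macro name here is illustrative, not the kernel's:

```c
/* Page-count to byte conversion as in the sysfs show callbacks above. */
#define DEMO_PAGE_SHIFT 12   /* 4 KiB pages assumed */

static unsigned long pages_to_bytes(unsigned long pages)
{
    return pages << DEMO_PAGE_SHIFT;
}
```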
static struct attribute *kbase_attrs[] = {
#ifdef CONFIG_MALI_DEBUG
&dev_attr_debug_command.attr,
@@ -5122,6 +5152,8 @@ static struct attribute *kbase_attrs[] = {
#if !MALI_USE_CSF
&dev_attr_js_ctx_scheduling_mode.attr,
#endif /* !MALI_USE_CSF */
&dev_attr_total_gpu_mem.attr,
&dev_attr_dma_buf_gpu_mem.attr,
NULL
};
@@ -5183,6 +5215,9 @@ int kbase_sysfs_init(struct kbase_device *kbdev)
&kbase_attr_group);
}
kbdev->proc_sysfs_node = kobject_create_and_add("kprcs",
&kbdev->dev->kobj);
return err;
}
@@ -5191,6 +5226,8 @@ void kbase_sysfs_term(struct kbase_device *kbdev)
sysfs_remove_group(&kbdev->dev->kobj, &kbase_mempool_attr_group);
sysfs_remove_group(&kbdev->dev->kobj, &kbase_scheduling_attr_group);
sysfs_remove_group(&kbdev->dev->kobj, &kbase_attr_group);
kobject_del(kbdev->proc_sysfs_node);
kobject_put(kbdev->proc_sysfs_node);
put_device(kbdev->dev);
}

View File

@@ -627,6 +627,9 @@ struct kbase_devfreq_queue_info {
* @total_gpu_pages: Total gpu pages allocated across all the contexts
* of this process, it accounts for both native allocations
* and dma_buf imported allocations.
* @dma_buf_pages: Total dma_buf pages allocated across all the contexts
* of this process, native allocations can be accounted for
* by subtracting this from &total_gpu_pages.
* @kctx_list: List of kbase contexts created for the process.
* @kprcs_node: Node to a rb_tree, kbase_device will maintain a rb_tree
* based on key tgid, kprcs_node is the node link to
@@ -636,14 +639,19 @@ struct kbase_devfreq_queue_info {
* Used to ensure that pages of allocation are accounted
* only once for the process, even if the allocation gets
* imported multiple times for the process.
* @kobj: Links to the per-process sysfs node
* &kbase_device.proc_sysfs_node.
*/
struct kbase_process {
pid_t tgid;
size_t total_gpu_pages;
size_t dma_buf_pages;
struct list_head kctx_list;
struct rb_node kprcs_node;
struct rb_root dma_buf_root;
struct kobject kobj;
};
/**
@@ -936,6 +944,7 @@ struct kbase_process {
* mapping and gpu memory usage at device level and
* other one at process level.
* @total_gpu_pages: Total GPU pages used for the complete GPU device.
* @dma_buf_pages: Total dma_buf pages used for GPU platform device.
* @dma_buf_lock: This mutex should be held while accounting for
* @total_gpu_pages from imported dma buffers.
* @gpu_mem_usage_lock: This spinlock should be held while accounting
@@ -953,6 +962,7 @@ struct kbase_process {
* @pcm_dev: The priority control manager device.
* @oom_notifier_block: notifier_block containing kernel-registered out-of-
* memory handler.
* @proc_sysfs_node: Sysfs directory node to store per-process stats.
*/
struct kbase_device {
u32 hw_quirks_sc;
@@ -1188,6 +1198,7 @@ struct kbase_device {
struct rb_root dma_buf_root;
size_t total_gpu_pages;
size_t dma_buf_pages;
struct mutex dma_buf_lock;
spinlock_t gpu_mem_usage_lock;
@@ -1214,6 +1225,7 @@ struct kbase_device {
struct job_status_qos job_status_addr;
struct v1_data* v1;
#endif
struct kobject *proc_sysfs_node;
};
/**

View File

@@ -1251,6 +1251,7 @@ int kbase_mem_init(struct kbase_device *kbdev)
spin_lock_init(&kbdev->gpu_mem_usage_lock);
kbdev->total_gpu_pages = 0;
kbdev->dma_buf_pages = 0;
kbdev->process_root = RB_ROOT;
kbdev->dma_buf_root = RB_ROOT;
mutex_init(&kbdev->dma_buf_lock);

View File

@@ -176,11 +176,15 @@ void kbase_remove_dma_buf_usage(struct kbase_context *kctx,
WARN_ON(dev_mapping_removed && !prcs_mapping_removed);
spin_lock(&kbdev->gpu_mem_usage_lock);
if (dev_mapping_removed)
kbdev->total_gpu_pages -= alloc->nents;
if (dev_mapping_removed) {
kbdev->total_gpu_pages -= alloc->nents;
kbdev->dma_buf_pages -= alloc->nents;
}
if (prcs_mapping_removed)
if (prcs_mapping_removed) {
kctx->kprcs->total_gpu_pages -= alloc->nents;
kctx->kprcs->dma_buf_pages -= alloc->nents;
}
if (dev_mapping_removed || prcs_mapping_removed)
kbase_trace_gpu_mem_usage(kbdev, kctx);
@@ -207,11 +211,15 @@ void kbase_add_dma_buf_usage(struct kbase_context *kctx,
WARN_ON(unique_dev_dmabuf && !unique_prcs_dmabuf);
spin_lock(&kbdev->gpu_mem_usage_lock);
if (unique_dev_dmabuf)
if (unique_dev_dmabuf) {
kbdev->total_gpu_pages += alloc->nents;
kbdev->dma_buf_pages += alloc->nents;
}
if (unique_prcs_dmabuf)
if (unique_prcs_dmabuf) {
kctx->kprcs->total_gpu_pages += alloc->nents;
kctx->kprcs->dma_buf_pages += alloc->nents;
}
if (unique_prcs_dmabuf || unique_dev_dmabuf)
kbase_trace_gpu_mem_usage(kbdev, kctx);
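The hunks above fix the accounting so that a dma-buf's pages are charged to both the total and the dma-buf counter, at the device level and the process level, but only when the mapping is the first (unique) one at that level. A compact sketch of that two-level scheme, with struct and parameter names assumed:

```c
/* Two-level dma-buf accounting as in kbase_add_dma_buf_usage() above;
 * names are illustrative. Each level is updated only on a unique mapping. */
struct counters {
    unsigned long total_pages;
    unsigned long dma_buf_pages;
};

static void add_dma_buf_usage(struct counters *dev, struct counters *prcs,
                              unsigned long nents,
                              int unique_dev, int unique_prcs)
{
    if (unique_dev) {
        dev->total_pages += nents;
        dev->dma_buf_pages += nents;
    }
    if (unique_prcs) {
        prcs->total_pages += nents;
        prcs->dma_buf_pages += nents;
    }
}
```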

View File

@@ -22,9 +22,6 @@
#include <ged_dvfs.h>
#if IS_ENABLED(CONFIG_PROC_FS)
#include <linux/proc_fs.h>
#if IS_ENABLED(CONFIG_MTK_GPU_MEM_TRACK)
#include <device/mali_kbase_device.h>
#endif
#endif
static bool mfg_powered;
@@ -107,44 +104,6 @@ static int mtk_common_gpu_utilization_show(struct seq_file *m, void *v)
}
DEFINE_SHOW_ATTRIBUTE(mtk_common_gpu_utilization);
static int mtk_common_gpu_memory_show(struct seq_file *m, void *v)
{
#if IS_ENABLED(CONFIG_MTK_GPU_MEM_TRACK)
struct list_head *entry;
const struct list_head *kbdev_list;
kbdev_list = kbase_device_get_list();
list_for_each(entry, kbdev_list) {
struct kbase_device *kbdev = NULL;
struct kbase_context *kctx;
kbdev = list_entry(entry, struct kbase_device, entry);
/* output the total memory usage and cap for this device */
seq_printf(m, "%-16s %10u\n",
kbdev->devname,
atomic_read(&(kbdev->memdev.used_pages)));
mutex_lock(&kbdev->kctx_list_lock);
list_for_each_entry(kctx, &kbdev->kctx_list, kctx_list_link) {
/* output the memory usage and cap for each kctx
* opened on this device
*/
seq_printf(m, " %s-0x%p %10u %10u\n",
"kctx",
kctx,
atomic_read(&(kctx->used_pages)),
kctx->tgid);
}
mutex_unlock(&kbdev->kctx_list_lock);
}
kbase_device_put_list(kbdev_list);
#else
seq_puts(m, "GPU mem_profile doesn't be enabled\n");
#endif
return 0;
}
DEFINE_SHOW_ATTRIBUTE(mtk_common_gpu_memory);
void mtk_common_procfs_init(void)
{
mtk_mali_root = proc_mkdir("mtk_mali", NULL);
@@ -153,14 +112,12 @@ void mtk_common_procfs_init(void)
return;
}
proc_create("utilization", 0444, mtk_mali_root, &mtk_common_gpu_utilization_fops);
proc_create("gpu_memory", 0444, mtk_mali_root, &mtk_common_gpu_memory_fops);
}
void mtk_common_procfs_exit(void)
{
mtk_mali_root = NULL;
remove_proc_entry("utilization", mtk_mali_root);
remove_proc_entry("gpu_memory", mtk_mali_root);
remove_proc_entry("mtk_mali", NULL);
}
#endif

View File

@@ -725,8 +725,7 @@ static int dmz_get_zoned_device(struct dm_target *ti, char *path)
dev->zone_nr_blocks = dmz_sect2blk(dev->zone_nr_sectors);
dev->zone_nr_blocks_shift = ilog2(dev->zone_nr_blocks);
dev->nr_zones = (dev->capacity + dev->zone_nr_sectors - 1)
>> dev->zone_nr_sectors_shift;
dev->nr_zones = blkdev_nr_zones(dev->bdev);
dmz->dev = dev;

File diff suppressed because it is too large

View File

@@ -1,93 +1,42 @@
#ifndef _AW8622_HAPTIC_H_
#define _AW8622_HAPTIC_H_
struct aw8622_effect_state {
int effect_idx;
int duration;
int secs;
unsigned long nsces;
bool is_shock_stop;
};
#include <linux/hrtimer.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <linux/pinctrl/consumer.h>
struct waveform_data_info {
bool is_loaded;
const char *waveform_name;
unsigned int waveform_period; // The time of the whole waveform unit is ms
unsigned int sample_freq;
unsigned int sample_nums;
unsigned int us_time_len; //unit us
unsigned int max_sample_val;
unsigned int len;
unsigned char *data;
};
#define AW_GPIO_MODE_LED_DEFAULT (0)
#define HAPTIC_GPIO_AW8622_DEFAULT (0)
#define HAPTIC_GPIO_AW8622_SET (1)
#define HAPTIC_PWM_MEMORY_MODE_CLOCK (26000000)
#define HAPTIC_PWM_OLD_MODE_CLOCK (26000000)
#define DEFAULT_FREQUENCY (208)
#define MIN_FREQUENCY (203)
#define MAX_FREQUENCY (212)
struct aw8622_haptic {
/* Hardware info */
unsigned int pwm_ch;
struct device *dev;
int hwen_gpio;
struct pinctrl *ppinctrl_pwm;
unsigned int default_pwm_freq;
unsigned int h_l_period;
/* Vibration waveform data field */
struct delayed_work load_waveform_work;
struct delayed_work hw_off_work;
unsigned int wave_sample_period; //wave sample period is ns
struct waveform_data_info *p_waveform_data;
int waveform_data_nums;
unsigned int wave_max_len;
bool is_malloc_wavedata_info;
int cur_load_idx;
unsigned int load_idx_offset;
bool is_malloc_dma_memory;
dma_addr_t wave_phy;
void *wave_vir;
unsigned dma_len;
spinlock_t spin_lock;
/* Vibration control field */
bool is_actived;
bool is_real_play;
bool is_power_on;
bool is_wavefrom_ready;
bool is_hwen;
int effect_idx;
unsigned int duration;
unsigned int interval;
unsigned int center_freq;
struct workqueue_struct *aw8622_wq;
struct hrtimer timer;
struct work_struct play_work;
struct work_struct stop_play_work;
struct work_struct test_work;
unsigned int test_cnt;
struct delayed_work hw_off_work;
struct workqueue_struct *aw8622_wq;
struct mutex mutex_lock;
struct hrtimer timer;
struct aw8622_effect_state effect_state;
struct pinctrl *ppinctrl_pwm;
int hwen_gpio;
unsigned int pwm_ch;
unsigned int duration;
unsigned int frequency;
unsigned int center_freq;
unsigned int default_pwm_freq;
unsigned int wave_sample_period;
bool is_power_on;
bool is_actived;
bool is_hwen;
};
#define LONG_SHOCK_BIT_NUMS_PER_SAMPLED_VALE (80)
#define WAVEFORM_DATA_OFFSET (12)
#define BIT_NUMS_PER_SAMPLED_VALE (250)
#define BIT_NUMS_PER_BYTE (8)
#define WAVEFORM_MAX_SAMPLE_VAL (127)
#define WAVEFORM_MIN_SAMPLE_VAL (-127)
#define MAX_NUMS_NONNEGATIVE_SIGNEC_8BIT (128) //The number of non-negative integers that a signed 8bit of data can represent
#define MAX_NUMS_POSITIVE_SIGNEC_8BIT (128)
#define MAX_COUNT_SIGNEC_8BIT (255)
#endif
#endif /* _AW8622_HAPTIC_H_ */

View File

@@ -457,3 +457,10 @@ config MTK_CONNSYS_DEDICATED_LOG_PATH
from connsys (by EMI or other specific path)
2. The scope may include (but not limited) Wi-Fi firmware log,
BT firmware log, GPS firmware log etc.
config WLAN_DRV_BUILD_IN
bool "Build Wlan module in kernel"
help
This will build the wlan driver and the corresponding components
into the kernel.
If unsure, say N.

View File

@@ -12,97 +12,60 @@
#
# Connectivity combo driver
# If KERNELRELEASE is defined, we've been invoked from the
# kernel build system and can use its language.
ifneq ($(KERNELRELEASE),)
subdir-ccflags-y += -I$(srctree)/
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/base/power/include
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/clkbuf/src
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/base/power/include/clkbuf_v1/$(MTK_PLATFORM)
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/include/mt-plat/$(MTK_PLATFORM)/include
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/include/mt-plat
subdir-ccflags-y += -I$(srctree)/
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/base/power/include
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/clkbuf/src
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/base/power/include/clkbuf_v1/$(MTK_PLATFORM)
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/include/mt-plat/$(MTK_PLATFORM)/include
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/include/mt-plat
ifeq ($(CONFIG_MTK_PMIC_CHIP_MT6359),y)
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/pmic/include/mt6359
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/pmic/include/mt6359
endif
ifeq ($(CONFIG_MTK_PMIC_NEW_ARCH),y)
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/pmic/include
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/pmic/include
endif
subdir-ccflags-y += -I$(srctree)/drivers/mmc/core
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/eccci/$(MTK_PLATFORM)
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/eccci/
subdir-ccflags-y += -I$(srctree)/drivers/clk/mediatek/
subdir-ccflags-y += -I$(srctree)/drivers/pinctrl/mediatek/
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/power_throttling/
subdir-ccflags-y += -I$(srctree)/drivers/mmc/core
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/eccci/$(MTK_PLATFORM)
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/eccci/
subdir-ccflags-y += -I$(srctree)/drivers/clk/mediatek/
subdir-ccflags-y += -I$(srctree)/drivers/pinctrl/mediatek/
subdir-ccflags-y += -I$(srctree)/drivers/misc/mediatek/power_throttling/
# Do Nothing, move to standalone repo
MODULE_NAME := connadp
obj-$(CONFIG_MTK_COMBO) += $(MODULE_NAME).o
# Do Nothing, move to standalone repo
MODULE_NAME := connadp
obj-$(CONFIG_MTK_COMBO) += $(MODULE_NAME).o
$(MODULE_NAME)-objs += common/connectivity_build_in_adapter.o
$(MODULE_NAME)-objs += common/wmt_build_in_adapter.o
$(MODULE_NAME)-objs += power_throttling/adapter.o
$(MODULE_NAME)-objs += power_throttling/core.o
ifeq ($(CONFIG_CONN_PWR_DEBUG),y)
$(MODULE_NAME)-objs += power_throttling/test.o
endif
ifeq ($(CONFIG_MTK_COMBO), y)
ccflags-y += -D CFG_CONNADP_BUILD_IN
endif
# Do build-in for Makefile checking
# export CONFIG_WLAN_DRV_BUILD_IN=y
ifeq ($(CONFIG_WLAN_DRV_BUILD_IN),y)
PATH_TO_WMT_DRV = vendor/mediatek/kernel_modules/connectivity/common
PATH_TO_WLAN_CHR_DRV = vendor/mediatek/kernel_modules/connectivity/wlan/adaptor
PATH_TO_WLAN_DRV = vendor/mediatek/kernel_modules/connectivity/wlan/core/gen4m
ABS_PATH_TO_WMT_DRV = $(srctree)/../$(PATH_TO_WMT_DRV)
ABS_PATH_TO_WLAN_CHR_DRV = $(srctree)/../$(PATH_TO_WLAN_CHR_DRV)
ABS_PATH_TO_WLAN_DRV = $(srctree)/../$(PATH_TO_WLAN_DRV)
# check wlan driver folder
ifeq (,$(wildcard $(ABS_PATH_TO_WMT_DRV)))
$(error $(ABS_PATH_TO_WMT_DRV) is not existed)
endif
ifeq (,$(wildcard $(ABS_PATH_TO_WLAN_CHR_DRV)))
$(error $(ABS_PATH_TO_WLAN_CHR_DRV) is not existed)
endif
ifeq (,$(wildcard $(ABS_PATH_TO_WLAN_DRV)))
$(error $(ABS_PATH_TO_WLAN_DRV) is not existed)
endif
$(warning symbolic link to $(PATH_TO_WMT_DRV))
$(warning symbolic link to $(PATH_TO_WLAN_CHR_DRV))
$(warning symbolic link to $(PATH_TO_WLAN_DRV))
$(shell unlink $(srctree)/$(src)/wmt_drv)
$(shell unlink $(srctree)/$(src)/wmt_chrdev_wifi)
$(shell unlink $(srctree)/$(src)/wlan_drv_gen4m)
$(shell ln -s $(ABS_PATH_TO_WMT_DRV) $(srctree)/$(src)/wmt_drv)
$(shell ln -s $(ABS_PATH_TO_WLAN_CHR_DRV) $(srctree)/$(src)/wmt_chrdev_wifi)
$(shell ln -s $(ABS_PATH_TO_WLAN_DRV) $(srctree)/$(src)/wlan_drv_gen4m)
# for gen4m options
export CONFIG_MTK_COMBO_WIFI_HIF=axi
export MTK_COMBO_CHIP=CONNAC
export WLAN_CHIP_ID=6765
export MTK_ANDROID_WMT=y
# Do build-in for xxx.c checking
subdir-ccflags-y += -D MTK_WCN_REMOVE_KERNEL_MODULE
subdir-ccflags-y += -D MTK_WCN_BUILT_IN_DRIVER
obj-y += wmt_drv/
obj-y += wmt_chrdev_wifi/
obj-y += wlan_drv_gen4m/
endif
# Otherwise we were called directly from the command line;
# invoke the kernel build system.
else
KERNELDIR ?= /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)
default:
$(MAKE) -C $(KERNELDIR) M=$(PWD) modules
$(MODULE_NAME)-objs += common/connectivity_build_in_adapter.o
$(MODULE_NAME)-objs += common/wmt_build_in_adapter.o
$(MODULE_NAME)-objs += power_throttling/adapter.o
$(MODULE_NAME)-objs += power_throttling/core.o
ifeq ($(CONFIG_CONN_PWR_DEBUG),y)
$(MODULE_NAME)-objs += power_throttling/test.o
endif
ifeq ($(CONFIG_MTK_COMBO), y)
ccflags-y += -D CFG_CONNADP_BUILD_IN
endif
ifeq ($(CONFIG_WLAN_DRV_BUILD_IN),y)
# for gen4m options
export CONFIG_MTK_COMBO_WIFI_HIF=axi
export MTK_COMBO_CHIP=SOC2_2X2
export BT_PLATFORM=connac1x
export WLAN_CHIP_ID=6781
export MTK_ANDROID_WMT=y
export MTK_ANDROID_EMI=y
export WIFI_IP_SET=2
export WIFI_ECO_VER=1
export MTK_WLAN_SERVICE=yes
# Do build-in for xxx.c checking
subdir-ccflags-y += -D MTK_WCN_REMOVE_KERNEL_MODULE
subdir-ccflags-y += -D MTK_WCN_BUILT_IN_DRIVER
obj-$(CONFIG_MTK_COMBO) += common/
obj-$(CONFIG_MTK_COMBO_WIFI) += wlan/adaptor/
obj-$(CONFIG_MTK_COMBO_CHIP_CONSYS_6781) += wlan/core/gen4m/
obj-$(CONFIG_MTK_BTIF) += bt/mt66xx/wmt/
obj-$(CONFIG_MTK_COMBO_GPS) += gps/
obj-$(CONFIG_MTK_COMBO) += connfem/
endif

View File

@@ -0,0 +1,48 @@
###############################################################################
# Generally Android.mk can not get KConfig setting
# we can use this way to get
# include the final KConfig
# but there is no $(AUTO_CONF) at the first time (no out folder) when make
#
#ifneq (,$(wildcard $(AUTO_CONF)))
#include $(AUTO_CONF)
#include $(CLEAR_VARS)
#endif
###############################################################################
###############################################################################
# Generally Android.mk can not get KConfig setting #
# #
# do not have any KConfig checking in Android.mk #
# do not have any KConfig checking in Android.mk #
# do not have any KConfig checking in Android.mk #
# #
# e.g. ifeq ($(CONFIG_MTK_COMBO_WIFI), m) #
# xxxx #
# endif #
# #
# e.g. ifneq ($(filter "MT6632",$(CONFIG_MTK_COMBO_CHIP)),) #
# xxxx #
# endif #
# #
# All the KConfig checking should move to Makefile for each module #
# All the KConfig checking should move to Makefile for each module #
# All the KConfig checking should move to Makefile for each module #
# #
###############################################################################
###############################################################################
LOCAL_PATH := $(call my-dir)
ifeq ($(MTK_BT_SUPPORT),yes)
include $(CLEAR_VARS)
LOCAL_MODULE := btmtk_sdio_unify.ko
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MODULE_OWNER := mtk
LOCAL_INIT_RC := init.btmtk_sdio.rc
LOCAL_SRC_FILES := $(patsubst $(LOCAL_PATH)/%,%,$(shell find $(LOCAL_PATH) -type f -name '*.[cho]')) Makefile
include $(MTK_KERNEL_MODULE)
endif

View File

@@ -0,0 +1,140 @@
export KERNEL_SRC := /lib/modules/$(shell uname -r)/build
#################### Configurations ####################
# Compile Options for bt driver configuration.
CONFIG_SUPPORT_BT_DL_WIFI_PATCH=y
CONFIG_SUPPORT_BT_DL_ZB_PATCH=y
CONFIG_SUPPORT_BLUEZ=n
CONFIG_SUPPORT_DVT=n
CONFIG_SUPPORT_HW_DVT=n
CONFIG_SUPPORT_MULTI_DEV_NODE=n
ifneq ($(TARGET_BUILD_VARIANT), user)
ccflags-y += -DBUILD_QA_DBG=1
else
ccflags-y += -DBUILD_QA_DBG=0
endif
ifeq ($(CONFIG_SUPPORT_BT_DL_WIFI_PATCH), y)
ccflags-y += -DCFG_SUPPORT_BT_DL_WIFI_PATCH=1
else
ccflags-y += -DCFG_SUPPORT_BT_DL_WIFI_PATCH=0
endif
ifeq ($(CONFIG_SUPPORT_BT_DL_ZB_PATCH), y)
ccflags-y += -DCFG_SUPPORT_BT_DL_ZB_PATCH=1
else
ccflags-y += -DCFG_SUPPORT_BT_DL_ZB_PATCH=0
endif
ifeq ($(CONFIG_SUPPORT_BLUEZ), y)
ccflags-y += -DCFG_SUPPORT_BLUEZ=1
else
ccflags-y += -DCFG_SUPPORT_BLUEZ=0
endif
ifeq ($(CONFIG_SUPPORT_HW_DVT), y)
ccflags-y += -DCFG_SUPPORT_HW_DVT=1
else
ccflags-y += -DCFG_SUPPORT_HW_DVT=0
endif
ifeq ($(SUPPORT_WAKEUP_IRQ), yes)
ccflags-y += -DCFG_SUPPORT_WAKEUP_IRQ
endif
ifeq ($(CONFIG_SUPPORT_DVT), y)
ccflags-y += -DCFG_SUPPORT_DVT=1
else
ccflags-y += -DCFG_SUPPORT_DVT=0
endif
ifeq ($(CONFIG_SUPPORT_MULTI_DEV_NODE), y)
ccflags-y += -DCFG_SUPPORT_MULTI_DEV_NODE=1
else
ccflags-y += -DCFG_SUPPORT_MULTI_DEV_NODE=0
endif
#################### Configurations ####################
# For chip interface, driver supports "usb", "sdio", "uart_tty", "uart_serdev" and "btif"
MTK_CHIP_IF := sdio
ifeq ($(MTK_CHIP_IF), sdio)
MOD_NAME = btmtk_sdio_unify
CFILES := sdio/btmtksdio.c btmtk_woble.c btmtk_buffer_mode.c btmtk_chip_reset.c
ccflags-y += -DCHIP_IF_SDIO
ccflags-y += -DSDIO_DEBUG=0
ccflags-y += -I$(src)/include/sdio
else ifeq ($(MTK_CHIP_IF), usb)
MOD_NAME = btmtk_usb_unify
CFILES := usb/btmtkusb.c btmtk_woble.c btmtk_chip_reset.c
ccflags-y += -DCHIP_IF_USB
ccflags-y += -I$(src)/include/usb
else ifeq ($(MTK_CHIP_IF), uart_tty)
MOD_NAME = btmtk_uart_unify
CFILES := uart/btmtktty.c btmtk_woble.c btmtk_chip_reset.c
ccflags-y += -DCHIP_IF_UART_TTY
ccflags-y += -I$(src)/include/uart/tty
else ifeq ($(MTK_CHIP_IF), uart_serdev)
MOD_NAME = btmtk_uart_unify
ccflags-y += -DCHIP_IF_UART_SERDEV
CFILES := uart/btmtkserdev.c
ccflags-y += -I$(src)/include/uart/serdev
else
MOD_NAME = btmtkbtif_unify
CFILES := btif/btmtk_btif.c
ccflags-y += -DCHIP_IF_BTIF
ccflags-y += -I$(src)/include/btif
endif
CFILES += btmtk_main.c btmtk_fw_log.c
ccflags-y += -I$(src)/include/ -I$(KERNEL_SRC)/include/ -I$(KERNEL_SRC)/drivers/bluetooth
ccflags-y += -DLINUX_OS
ccflags-y += -Werror
$(MOD_NAME)-objs := $(CFILES:.c=.o)
obj-m += $(MOD_NAME).o
ifneq ($(TARGET_BUILD_VARIANT), user)
ccflags-y += -DBTMTK_DEBUG_SOP
endif
#VPATH = /opt/toolchains/gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux
#UART_MOD_NAME = btmtk_uart
#UART_CFILES := \
# btmtk_uart_main.c
#$(UART_MOD_NAME)-objs := $(UART_CFILES:.c=.o)
###############################################################################
# Common
###############################################################################
#obj-m := $(UART_MOD_NAME).o
all:
make -C $(KERNEL_SRC) M=$(PWD) modules
clean:
make -C $(KERNEL_SRC) M=$(PWD) clean
# Check coding style
# export IGNORE_CODING_STYLE_RULES := NEW_TYPEDEFS,LEADING_SPACE,CODE_INDENT,SUSPECT_CODE_INDENT
ccs:
./util/checkpatch.pl -f ./sdio/btmtksdio.c
./util/checkpatch.pl -f ./include/sdio/btmtk_sdio.h
./util/checkpatch.pl -f ./include/btmtk_define.h
./util/checkpatch.pl -f ./include/btmtk_drv.h
./util/checkpatch.pl -f ./include/btmtk_chip_if.h
./util/checkpatch.pl -f ./include/btmtk_main.h
./util/checkpatch.pl -f ./include/btmtk_buffer_mode.h
./util/checkpatch.pl -f ./include/uart/tty/btmtk_uart_tty.h
./util/checkpatch.pl -f ./uart/btmtktty.c
./util/checkpatch.pl -f ./include/btmtk_fw_log.h
./util/checkpatch.pl -f ./include/btmtk_woble.h
./util/checkpatch.pl -f ./include/uart/btmtk_uart.h
./util/checkpatch.pl -f ./uart/btmtk_uart_main.c
./util/checkpatch.pl -f ./include/usb/btmtk_usb.h
./util/checkpatch.pl -f ./usb/btmtkusb.c
./util/checkpatch.pl -f btmtk_fw_log.c
./util/checkpatch.pl -f btmtk_main.c
./util/checkpatch.pl -f btmtk_buffer_mode.c
./util/checkpatch.pl -f btmtk_woble.c
./util/checkpatch.pl -f btmtk_chip_reset.c

View File

@@ -0,0 +1,21 @@
#Please follow the example pattern
#There are some SPACES between parameter and parameter
[Country Code]
[Index] BR_EDR_PWR_MODE, | EDR_MAX_TX_PWR, | BLE_DEFAULT_TX_PWR, | BLE_DEFAULT_TX_PWR_2M, | BLE_LR_S2, | BLE_LR_S8
[AU,SA]
[BT0] 1, 1.75, 1.5, 1, 1, 1
[BT1] 1, 2.75, 2.5, 2, 1, 1
[TW,US]
[BT0] 1, 14, 15, 16, 20, 20
[BT1] 1, 17, 17, 17, 20, 20
[JP]
[BT0] 0, 5.25, -3, -3, -2, -2
[BT1] 0, 5.5, -2.5, -2, -2, -2
[DE]
[BT0] 0, -32, -29, -29, -29, -29
[BT1] 0, -32, -29, -29, -29, -29

View File

@@ -0,0 +1,259 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018 MediaTek Inc.
*/
#include "btmtk_buffer_mode.h"
static struct btmtk_buffer_mode_struct btmtk_buffer_mode;
static int btmtk_buffer_mode_check_auto_mode(struct btmtk_buffer_mode_struct *buffer_mode)
{
u16 addr = 1;
u8 value = 0;
if (buffer_mode->efuse_mode != AUTO_MODE)
return 0;
if (btmtk_efuse_read(buffer_mode->bdev, addr, &value)) {
BTMTK_WARN("read fail");
BTMTK_WARN("Use EEPROM Bin file mode");
buffer_mode->efuse_mode = BIN_FILE_MODE;
return -EIO;
}
if (value == ((buffer_mode->bdev->chip_id & 0xFF00) >> 8)) {
BTMTK_WARN("get efuse[1]: 0x%02x", value);
BTMTK_WARN("use efuse mode");
buffer_mode->efuse_mode = EFUSE_MODE;
} else {
BTMTK_WARN("get efuse[1]: 0x%02x", value);
BTMTK_WARN("Use EEPROM Bin file mode");
buffer_mode->efuse_mode = BIN_FILE_MODE;
}
return 0;
}
static int btmtk_buffer_mode_parse_mode(uint8_t *buf, size_t buf_size)
{
int efuse_mode = EFUSE_MODE;
char *p_buf = NULL;
char *ptr = NULL, *p = NULL;
if (!buf) {
BTMTK_WARN("buf is null");
return efuse_mode;
} else if (buf_size < (strlen(BUFFER_MODE_SWITCH_FIELD) + 2)) {
BTMTK_WARN("incorrect buf size(%d)", (int)buf_size);
return efuse_mode;
}
p_buf = kmalloc(buf_size + 1, GFP_KERNEL);
if (!p_buf)
return efuse_mode;
memcpy(p_buf, buf, buf_size);
p_buf[buf_size] = '\0';
/* find string */
p = ptr = strstr(p_buf, BUFFER_MODE_SWITCH_FIELD);
if (!ptr) {
BTMTK_ERR("Can't find %s", BUFFER_MODE_SWITCH_FIELD);
goto out;
}
if (p > p_buf) {
p--;
while ((*p == ' ') && (p != p_buf))
p--;
if (*p == '#') {
BTMTK_ERR("It's not EEPROM - Bin file mode");
goto out;
}
}
/* check access mode */
ptr += (strlen(BUFFER_MODE_SWITCH_FIELD) + 1);
BTMTK_WARN("It's EEPROM bin mode: %c", *ptr);
efuse_mode = *ptr - '0';
if (efuse_mode > AUTO_MODE)
efuse_mode = EFUSE_MODE;
out:
kfree(p_buf);
return efuse_mode;
}
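`btmtk_buffer_mode_parse_mode()` above searches the settings buffer for the switch field, walks back over spaces to see whether the line is commented out with `#`, and reads the digit after the field. A self-contained sketch of that parse logic; the field name and separator convention are assumed to match the description above:

```c
#include <string.h>

/* Sketch of the config-field parse above: find "FIELD X" in a buffer,
 * ignore it if the line is commented out with '#', return the digit X.
 * Falls back to dflt when the field is absent or commented. */
static int parse_mode(const char *buf, const char *field, int dflt)
{
    const char *p = strstr(buf, field);
    const char *q;

    if (!p)
        return dflt;
    for (q = p; q > buf && q[-1] == ' '; q--)
        ;                              /* skip spaces before the field */
    if (q > buf && q[-1] == '#')
        return dflt;                   /* commented-out line */
    p += strlen(field) + 1;            /* skip field name and separator */
    return *p - '0';
}
```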
static int btmtk_buffer_mode_set_addr(struct btmtk_buffer_mode_struct *buffer_mode)
{
u8 cmd[SET_ADDRESS_CMD_LEN] = {0x01, 0x1A, 0xFC, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
u8 event[SET_ADDRESS_EVT_LEN] = {0x04, 0x0E, 0x04, 0x01, 0x1A, 0xFC, 0x00};
int ret = 0;
if (buffer_mode->bt0_mac[0] == 0x00 && buffer_mode->bt0_mac[1] == 0x00
&& buffer_mode->bt0_mac[2] == 0x00 && buffer_mode->bt0_mac[3] == 0x00
&& buffer_mode->bt0_mac[4] == 0x00 && buffer_mode->bt0_mac[5] == 0x00) {
BTMTK_WARN("BDAddr is Zero, not set");
} else {
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 5] = buffer_mode->bt0_mac[0];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 4] = buffer_mode->bt0_mac[1];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 3] = buffer_mode->bt0_mac[2];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 2] = buffer_mode->bt0_mac[3];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 1] = buffer_mode->bt0_mac[4];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET] = buffer_mode->bt0_mac[5];
BTMTK_INFO("%s: SEND BDADDR = "MACSTR, __func__, MAC2STR(buffer_mode->bt0_mac));
ret = btmtk_main_send_cmd(buffer_mode->bdev,
cmd, SET_ADDRESS_CMD_LEN,
event, SET_ADDRESS_EVT_LEN,
0, 0, BTMTK_TX_CMD_FROM_DRV);
}
BTMTK_INFO("%s done", __func__);
return ret;
}
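`btmtk_buffer_mode_set_addr()` above writes the BD address into the HCI command payload in reverse order, i.e. least-significant byte first, so `bt0_mac[5]` lands at the start of the payload. The copy in isolation:

```c
#include <stdint.h>

/* BD-address byte order used in the set-address command above:
 * the payload carries the address least-significant byte first. */
static void copy_bdaddr_to_payload(uint8_t payload[6], const uint8_t mac[6])
{
    int i;

    for (i = 0; i < 6; i++)
        payload[i] = mac[5 - i];
}
```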
static int btmtk_buffer_mode_set_radio(struct btmtk_buffer_mode_struct *buffer_mode)
{
u8 cmd[SET_RADIO_CMD_LEN] = {0x01, 0x2C, 0xFC, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
u8 event[SET_RADIO_EVT_LEN] = {0x04, 0x0E, 0x04, 0x01, 0x2C, 0xFC, 0x00};
int ret = 0;
cmd[SET_RADIO_CMD_EDR_DEF_OFFSET] = buffer_mode->bt0_radio.radio_0 & 0x3F; /* edr_init_pwr */
cmd[SET_RADIO_CMD_BLE_OFFSET] = buffer_mode->bt0_radio.radio_2 & 0x3F; /* ble_default_pwr */
cmd[SET_RADIO_CMD_EDR_MAX_OFFSET] = buffer_mode->bt0_radio.radio_1 & 0x3F; /* edr_max_pwr */
cmd[SET_RADIO_CMD_EDR_MODE_OFFSET] = (buffer_mode->bt0_radio.radio_0 & 0xC0) >> 6; /* edr_pwr_mode */
BTMTK_INFO_RAW(cmd, SET_RADIO_CMD_LEN, "%s: Send", __func__);
ret = btmtk_main_send_cmd(buffer_mode->bdev,
cmd, SET_RADIO_CMD_LEN,
event, SET_RADIO_EVT_LEN,
0, 0, BTMTK_TX_CMD_FROM_DRV);
BTMTK_INFO("%s done", __func__);
return ret;
}
static int btmtk_buffer_mode_set_group_boundary(struct btmtk_buffer_mode_struct *buffer_mode)
{
u8 cmd[SET_GRP_CMD_LEN] = {0x01, 0xEA, 0xFC, 0x09, 0x02, 0x0B, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
u8 event[SET_GRP_EVT_LEN] = {0x04, 0x0E, 0x04, 0x01, 0xEA, 0xFC, 0x00};
int ret = 0;
memcpy(&cmd[SET_GRP_CMD_PAYLOAD_OFFSET], buffer_mode->bt0_ant0_grp_boundary, BUFFER_MODE_GROUP_LENGTH);
BTMTK_INFO_RAW(cmd, SET_GRP_CMD_LEN, "%s: Send", __func__);
ret = btmtk_main_send_cmd(buffer_mode->bdev,
cmd, SET_GRP_CMD_LEN,
event, SET_GRP_EVT_LEN,
0, 0, BTMTK_TX_CMD_FROM_DRV);
BTMTK_INFO("%s done", __func__);
return ret;
}
static int btmtk_buffer_mode_set_power_offset(struct btmtk_buffer_mode_struct *buffer_mode)
{
u8 cmd[SET_PWR_OFFSET_CMD_LEN] = {0x01, 0xEA, 0xFC, 0x0A,
0x02, 0x0A, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
u8 event[SET_PWR_OFFSET_EVT_LEN] = {0x04, 0x0E, 0x04, 0x01, 0xEA, 0xFC, 0x00};
int ret = 0;
memcpy(&cmd[SET_PWR_OFFSET_CMD_PAYLOAD_OFFSET], buffer_mode->bt0_ant0_pwr_offset, BUFFER_MODE_CAL_LENGTH);
BTMTK_INFO_RAW(cmd, SET_PWR_OFFSET_CMD_LEN, "%s: Send", __func__);
ret = btmtk_main_send_cmd(buffer_mode->bdev,
cmd, SET_PWR_OFFSET_CMD_LEN,
event, SET_PWR_OFFSET_EVT_LEN,
0, 0, BTMTK_TX_CMD_FROM_DRV);
BTMTK_INFO("%s done", __func__);
return ret;
}
int btmtk_buffer_mode_send(struct btmtk_buffer_mode_struct *buffer_mode)
{
int ret = 0;
if (buffer_mode == NULL) {
BTMTK_INFO("buffer_mode is NULL, not support");
return -EIO;
}
if (btmtk_buffer_mode_check_auto_mode(buffer_mode)) {
BTMTK_ERR("check auto mode failed");
return -EIO;
}
if (buffer_mode->efuse_mode == BIN_FILE_MODE) {
ret = btmtk_buffer_mode_set_addr(buffer_mode);
if (ret < 0)
BTMTK_ERR("set addr failed");
ret = btmtk_buffer_mode_set_radio(buffer_mode);
if (ret < 0)
BTMTK_ERR("set radio failed");
ret = btmtk_buffer_mode_set_group_boundary(buffer_mode);
if (ret < 0)
BTMTK_ERR("set group_boundary failed");
ret = btmtk_buffer_mode_set_power_offset(buffer_mode);
if (ret < 0)
BTMTK_ERR("set power_offset failed");
}
return 0;
}
void btmtk_buffer_mode_initialize(struct btmtk_dev *bdev, struct btmtk_buffer_mode_struct **buffer_mode)
{
int ret = 0;
u32 code_len = 0;
btmtk_buffer_mode.bdev = bdev;
ret = btmtk_load_code_from_setting_files(BUFFER_MODE_SWITCH_FILE, bdev->intf_dev, &code_len, bdev);
btmtk_buffer_mode.efuse_mode = btmtk_buffer_mode_parse_mode(bdev->setting_file, code_len);
if (btmtk_buffer_mode.efuse_mode == EFUSE_MODE)
return;
if (bdev->flavor)
(void)snprintf(btmtk_buffer_mode.file_name, MAX_BIN_FILE_NAME_LEN, "EEPROM_MT%04x_1a.bin",
bdev->chip_id & 0xffff);
else
(void)snprintf(btmtk_buffer_mode.file_name, MAX_BIN_FILE_NAME_LEN, "EEPROM_MT%04x_1.bin",
bdev->chip_id & 0xffff);
ret = btmtk_load_code_from_setting_files(btmtk_buffer_mode.file_name, bdev->intf_dev, &code_len, bdev);
if (ret < 0) {
BTMTK_ERR("set load %s failed", btmtk_buffer_mode.file_name);
return;
}
memcpy(btmtk_buffer_mode.bt0_mac, &bdev->setting_file[BT0_MAC_OFFSET],
BUFFER_MODE_MAC_LENGTH);
memcpy(btmtk_buffer_mode.bt1_mac, &bdev->setting_file[BT1_MAC_OFFSET],
BUFFER_MODE_MAC_LENGTH);
memcpy(&btmtk_buffer_mode.bt0_radio, &bdev->setting_file[BT0_RADIO_OFFSET],
BUFFER_MODE_RADIO_LENGTH);
memcpy(&btmtk_buffer_mode.bt1_radio, &bdev->setting_file[BT1_RADIO_OFFSET],
BUFFER_MODE_RADIO_LENGTH);
memcpy(btmtk_buffer_mode.bt0_ant0_grp_boundary, &bdev->setting_file[BT0_GROUP_ANT0_OFFSET],
BUFFER_MODE_GROUP_LENGTH);
memcpy(btmtk_buffer_mode.bt0_ant1_grp_boundary, &bdev->setting_file[BT0_GROUP_ANT1_OFFSET],
BUFFER_MODE_GROUP_LENGTH);
memcpy(btmtk_buffer_mode.bt1_ant0_grp_boundary, &bdev->setting_file[BT1_GROUP_ANT0_OFFSET],
BUFFER_MODE_GROUP_LENGTH);
memcpy(btmtk_buffer_mode.bt1_ant1_grp_boundary, &bdev->setting_file[BT1_GROUP_ANT1_OFFSET],
BUFFER_MODE_GROUP_LENGTH);
memcpy(btmtk_buffer_mode.bt0_ant0_pwr_offset, &bdev->setting_file[BT0_CAL_ANT0_OFFSET],
BUFFER_MODE_CAL_LENGTH);
memcpy(btmtk_buffer_mode.bt0_ant1_pwr_offset, &bdev->setting_file[BT0_CAL_ANT1_OFFSET],
BUFFER_MODE_CAL_LENGTH);
memcpy(btmtk_buffer_mode.bt1_ant0_pwr_offset, &bdev->setting_file[BT1_CAL_ANT0_OFFSET],
BUFFER_MODE_CAL_LENGTH);
memcpy(btmtk_buffer_mode.bt1_ant1_pwr_offset, &bdev->setting_file[BT1_CAL_ANT1_OFFSET],
BUFFER_MODE_CAL_LENGTH);
*buffer_mode = &btmtk_buffer_mode;
}


@@ -0,0 +1,197 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018 MediaTek Inc.
*/
#include "btmtk_chip_reset.h"
#if (KERNEL_VERSION(4, 15, 0) > LINUX_VERSION_CODE)
static void btmtk_reset_timer(unsigned long arg)
{
struct btmtk_dev *bdev = (struct btmtk_dev *)arg;
BTMTK_INFO("%s: chip_reset not triggered in %d seconds, trigger it directly",
__func__, CHIP_RESET_TIMEOUT);
schedule_work(&bdev->reset_waker);
}
#else
static void btmtk_reset_timer(struct timer_list *timer)
{
struct btmtk_dev *bdev = from_timer(bdev, timer, chip_reset_timer);
BTMTK_INFO("%s: chip_reset not triggered in %d seconds, trigger it directly",
__func__, CHIP_RESET_TIMEOUT);
schedule_work(&bdev->reset_waker);
}
#endif
void btmtk_reset_timer_add(struct btmtk_dev *bdev)
{
BTMTK_INFO("%s: create chip_reset timer", __func__);
#if (KERNEL_VERSION(4, 15, 0) > LINUX_VERSION_CODE)
init_timer(&bdev->chip_reset_timer);
bdev->chip_reset_timer.function = btmtk_reset_timer;
bdev->chip_reset_timer.data = (unsigned long)bdev;
#else
timer_setup(&bdev->chip_reset_timer, btmtk_reset_timer, 0);
#endif
}
void btmtk_reset_timer_update(struct btmtk_dev *bdev)
{
mod_timer(&bdev->chip_reset_timer, jiffies + CHIP_RESET_TIMEOUT * HZ);
}
void btmtk_reset_timer_del(struct btmtk_dev *bdev)
{
if (timer_pending(&bdev->chip_reset_timer)) {
del_timer_sync(&bdev->chip_reset_timer);
BTMTK_INFO("%s exit", __func__);
}
}
void btmtk_reset_waker(struct work_struct *work)
{
struct btmtk_dev *bdev = container_of(work, struct btmtk_dev, reset_waker);
struct btmtk_cif_state *cif_state = NULL;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
int state = BTMTK_STATE_INIT;
int cif_event = 0, err = 0;
int cur = 0;
/* Check chip state is ok to do reset or not */
state = btmtk_get_chip_state(bdev);
if (state == BTMTK_STATE_SUSPEND) {
BTMTK_INFO("%s suspend state don't do chip reset!", __func__);
return;
}
if (state == BTMTK_STATE_PROBE) {
bmain_info->chip_reset_flag = 1;
BTMTK_INFO("%s just do whole chip reset in probe stage!", __func__);
}
btmtk_reset_timer_del(bdev);
if (atomic_read(&bmain_info->chip_reset) ||
atomic_read(&bmain_info->subsys_reset)) {
BTMTK_INFO("%s return, chip_reset = %d, subsys_reset = %d!", __func__,
atomic_read(&bmain_info->chip_reset), atomic_read(&bmain_info->subsys_reset));
return;
}
if (bmain_info->hif_hook.dump_debug_sop)
bmain_info->hif_hook.dump_debug_sop(bdev);
DUMP_TIME_STAMP("chip_reset_start");
cif_event = HIF_EVENT_SUBSYS_RESET;
if (BTMTK_CIF_IS_NULL(bdev, cif_event)) {
/* Error */
BTMTK_WARN("%s priv setting is NULL", __func__);
return;
}
if (!bdev->bt_cfg.support_dongle_reset) {
BTMTK_ERR("%s chip_reset is not support", __func__);
return;
}
cif_state = &bdev->cif_state[cif_event];
/* Set Entering state */
btmtk_set_chip_state((void *)bdev, cif_state->ops_enter);
BTMTK_INFO("%s: Receive a byte (0xFF)", __func__);
/* read interrupt EP15 CR */
bdev->sco_num = 0;
if (bmain_info->chip_reset_flag == 0 &&
atomic_read(&bmain_info->subsys_reset_conti_count) < BTMTK_MAX_SUBSYS_RESET_COUNT) {
if (bmain_info->hif_hook.subsys_reset) {
cur = atomic_cmpxchg(&bmain_info->subsys_reset, BTMTK_RESET_DONE, BTMTK_RESET_DOING);
if (cur == BTMTK_RESET_DOING) {
BTMTK_INFO("%s: subsys reset in progress, return", __func__);
return;
}
DUMP_TIME_STAMP("subsys_chip_reset_start");
err = bmain_info->hif_hook.subsys_reset(bdev);
atomic_set(&bmain_info->subsys_reset, BTMTK_RESET_DONE);
if (err < 0) {
BTMTK_INFO("subsys reset failed, do whole chip reset!");
goto L0RESET;
}
atomic_inc(&bmain_info->subsys_reset_count);
atomic_inc(&bmain_info->subsys_reset_conti_count);
DUMP_TIME_STAMP("subsys_chip_reset_end");
bmain_info->reset_stack_flag = HW_ERR_CODE_CHIP_RESET;
err = btmtk_cap_init(bdev);
if (err < 0) {
BTMTK_ERR("btmtk init failed!");
goto L0RESET;
}
err = btmtk_load_rom_patch(bdev);
if (err < 0) {
BTMTK_INFO("btmtk load rom patch failed!");
goto L0RESET;
}
btmtk_send_hw_err_to_host(bdev);
btmtk_woble_wake_unlock(bdev);
if (bmain_info->hif_hook.chip_reset_notify)
bmain_info->hif_hook.chip_reset_notify(bdev);
} else {
err = -1;
BTMTK_INFO("%s: Not support subsys chip reset", __func__);
goto L0RESET;
}
} else {
err = -1;
BTMTK_INFO("%s: chip_reset_flag is %d, subsys_reset_count %d",
__func__,
bmain_info->chip_reset_flag,
atomic_read(&bmain_info->subsys_reset_conti_count));
}
L0RESET:
if (err < 0) {
/* L0.5 reset failed or not support, do whole chip reset */
/* TODO: need to confirm with usb host when suspend fail, to do chip reset,
* because usb3.0 need to toggle reset pin after hub_event unfreeze,
* otherwise, it will not occur disconnect on Capy Platform. When Mstar
* chip has usb3.0 port, we will use Mstar platform to do comparison
* test, then found the final solution.
*/
/* msleep(2000); */
if (bmain_info->hif_hook.whole_reset) {
DUMP_TIME_STAMP("whole_chip_reset_start");
bmain_info->hif_hook.whole_reset(bdev);
atomic_inc(&bmain_info->whole_reset_count);
DUMP_TIME_STAMP("whole_chip_reset_end");
} else {
BTMTK_INFO("%s: Not support whole chip reset", __func__);
}
}
DUMP_TIME_STAMP("chip_reset_end");
/* Set End/Error state */
if (err < 0)
btmtk_set_chip_state((void *)bdev, cif_state->ops_error);
else
btmtk_set_chip_state((void *)bdev, cif_state->ops_end);
}
void btmtk_reset_trigger(struct btmtk_dev *bdev)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
if (atomic_read(&bmain_info->chip_reset) ||
atomic_read(&bmain_info->subsys_reset)) {
BTMTK_INFO("%s return, chip_reset = %d, subsys_reset = %d!", __func__,
atomic_read(&bmain_info->chip_reset), atomic_read(&bmain_info->subsys_reset));
return;
}
schedule_work(&bdev->reset_waker);
}


@@ -0,0 +1,992 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018 MediaTek Inc.
*/
/* Define for proc node */
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include "btmtk_fw_log.h"
/*
* BT Logger Tool will turn on/off Firmware Picus log, and set 3 log levels (Low, SQC and Debug)
* For extension capability, the driver does not check the value range.
*
* Combine log state and log level to below settings:
* - 0x00: OFF
* - 0x01: Low Power
* - 0x02: SQC
* - 0x03: Debug
*/
#define BT_FWLOG_DEFAULT_LEVEL 0x02
/* CTD BT log function and log status */
static wait_queue_head_t BT_log_wq;
static struct semaphore ioctl_mtx;
static uint8_t g_bt_on = BT_FWLOG_OFF;
static uint8_t g_log_on = BT_FWLOG_OFF;
static uint8_t g_log_level = BT_FWLOG_DEFAULT_LEVEL;
static uint8_t g_log_current = BT_FWLOG_OFF;
/* For fwlog dev node setting */
static struct btmtk_fops_fwlog *g_fwlog;
const struct file_operations BT_fopsfwlog = {
.open = btmtk_fops_openfwlog,
.release = btmtk_fops_closefwlog,
.read = btmtk_fops_readfwlog,
.write = btmtk_fops_writefwlog,
.poll = btmtk_fops_pollfwlog,
.unlocked_ioctl = btmtk_fops_unlocked_ioctlfwlog,
.compat_ioctl = btmtk_fops_compat_ioctlfwlog
};
/** read_write for proc */
static int btmtk_proc_show(struct seq_file *m, void *v);
static int btmtk_proc_open(struct inode *inode, struct file *file);
static int btmtk_proc_chip_reset_count_open(struct inode *inode, struct file *file);
static int btmtk_proc_chip_reset_count_show(struct seq_file *m, void *v);
#if (KERNEL_VERSION(5, 6, 0) > LINUX_VERSION_CODE)
static const struct file_operations BT_proc_fops = {
.open = btmtk_proc_open,
.read = seq_read,
.release = single_release,
};
static const struct file_operations BT_proc_chip_reset_count_fops = {
.open = btmtk_proc_chip_reset_count_open,
.read = seq_read,
.release = single_release,
};
#else
static const struct proc_ops BT_proc_fops = {
.proc_open = btmtk_proc_open,
.proc_read = seq_read,
.proc_release = single_release,
};
static const struct proc_ops BT_proc_chip_reset_count_fops = {
.proc_open = btmtk_proc_chip_reset_count_open,
.proc_read = seq_read,
.proc_release = single_release,
};
#endif
__weak int32_t btmtk_intcmd_wmt_utc_sync(void)
{
BTMTK_ERR("weak function %s not implement", __func__);
return -1;
}
__weak int32_t btmtk_intcmd_set_fw_log(uint8_t flag)
{
BTMTK_ERR("weak function %s not implement", __func__);
return -1;
}
void fw_log_bt_state_cb(uint8_t state)
{
uint8_t on_off;
on_off = (state == FUNC_ON) ? BT_FWLOG_ON : BT_FWLOG_OFF;
BTMTK_INFO("bt_on(0x%x) state(%d) on_off(0x%x)", g_bt_on, state, on_off);
if (g_bt_on != on_off) {
// changed
if (on_off == BT_FWLOG_OFF) { // should turn off
g_bt_on = BT_FWLOG_OFF;
BTMTK_INFO("BT func off, no need to send hci cmd");
} else {
g_bt_on = BT_FWLOG_ON;
if (g_log_current) {
btmtk_intcmd_set_fw_log(g_log_current);
btmtk_intcmd_wmt_utc_sync();
}
}
}
}
void fw_log_bt_event_cb(void)
{
BTMTK_DBG("fw_log_bt_event_cb");
wake_up_interruptible(&BT_log_wq);
}
static int btmtk_proc_show(struct seq_file *m, void *v)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
(void)seq_printf(m, "patch version:%s driver version:%s\n", bmain_info->fw_version_str, VERSION);
return 0;
}
static int btmtk_proc_chip_reset_count_show(struct seq_file *m, void *v)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
(void)seq_printf(m, "whole_reset_count=%d subsys_reset_count=%d\n",
atomic_read(&bmain_info->whole_reset_count),
atomic_read(&bmain_info->subsys_reset_count));
return 0;
}
static int btmtk_proc_open(struct inode *inode, struct file *file)
{
return single_open(file, btmtk_proc_show, NULL);
}
static int btmtk_proc_chip_reset_count_open(struct inode *inode, struct file *file)
{
return single_open(file, btmtk_proc_chip_reset_count_show, NULL);
}
static void btmtk_proc_create_new_entry(void)
{
struct proc_dir_entry *proc_show_entry = NULL;
struct proc_dir_entry *proc_show_chip_reset_count_entry = NULL;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s, proc initialized", __func__);
bmain_info->proc_dir = proc_mkdir("stpbt", NULL);
if (bmain_info->proc_dir == NULL) {
BTMTK_ERR("Unable to create dir");
return;
}
proc_show_entry = proc_create("bt_fw_version", 0640, bmain_info->proc_dir, &BT_proc_fops);
if (proc_show_entry == NULL) {
BTMTK_ERR("Unable to create bt_fw_version node");
remove_proc_entry("stpbt", NULL);
}
proc_show_chip_reset_count_entry = proc_create(PROC_BT_CHIP_RESET_COUNT, 0640,
bmain_info->proc_dir, &BT_proc_chip_reset_count_fops);
if (proc_show_chip_reset_count_entry == NULL) {
BTMTK_ERR("Unable to create %s node", PROC_BT_CHIP_RESET_COUNT);
remove_proc_entry(PROC_ROOT_DIR, NULL);
}
}
static void btmtk_proc_delete_entry(void)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
if (bmain_info->proc_dir == NULL)
return;
remove_proc_entry("bt_fw_version", bmain_info->proc_dir);
BTMTK_INFO("%s, proc device node and folder removed!!", __func__);
remove_proc_entry(PROC_BT_CHIP_RESET_COUNT, bmain_info->proc_dir);
BTMTK_INFO("%s, proc device node and folder %s removed!!", __func__, PROC_BT_CHIP_RESET_COUNT);
remove_proc_entry(PROC_ROOT_DIR, NULL);
bmain_info->proc_dir = NULL;
}
int btmtk_fops_initfwlog(void)
{
#ifdef STATIC_REGISTER_FWLOG_NODE
static int BT_majorfwlog = FIXED_STPBT_MAJOR_DEV_ID + 1;
dev_t devIDfwlog = MKDEV(BT_majorfwlog, 1);
#else
static int BT_majorfwlog;
dev_t devIDfwlog = MKDEV(BT_majorfwlog, 0);
#endif
int ret = 0;
int cdevErr = 0;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: Start", __func__);
if (g_fwlog == NULL) {
g_fwlog = kzalloc(sizeof(*g_fwlog), GFP_KERNEL);
if (!g_fwlog) {
BTMTK_ERR("%s: alloc memory fail (g_fwlog)", __func__);
return -1;
}
}
#ifdef STATIC_REGISTER_FWLOG_NODE
ret = register_chrdev_region(devIDfwlog, 1, "BT_chrdevfwlog");
if (ret) {
BTMTK_ERR("%s: fail to register chrdev(%x)", __func__, devIDfwlog);
goto alloc_error;
}
#else
ret = alloc_chrdev_region(&devIDfwlog, 0, 1, "BT_chrdevfwlog");
if (ret) {
BTMTK_ERR("%s: fail to allocate chrdev", __func__);
goto alloc_error;
}
#endif
BT_majorfwlog = MAJOR(devIDfwlog);
cdev_init(&g_fwlog->BT_cdevfwlog, &BT_fopsfwlog);
g_fwlog->BT_cdevfwlog.owner = THIS_MODULE;
cdevErr = cdev_add(&g_fwlog->BT_cdevfwlog, devIDfwlog, 1);
if (cdevErr)
goto cdv_error;
g_fwlog->pBTClass = class_create(THIS_MODULE, BT_FWLOG_DEV_NODE);
if (IS_ERR(g_fwlog->pBTClass)) {
BTMTK_ERR("%s: class create fail, error code(%ld)\n", __func__, PTR_ERR(g_fwlog->pBTClass));
goto create_node_error;
}
g_fwlog->pBTDevfwlog = device_create(g_fwlog->pBTClass, NULL, devIDfwlog, NULL,
"%s", BT_FWLOG_DEV_NODE);
if (IS_ERR(g_fwlog->pBTDevfwlog)) {
BTMTK_ERR("%s: device(stpbtfwlog) create fail, error code(%ld)", __func__,
PTR_ERR(g_fwlog->pBTDevfwlog));
goto create_node_error;
}
BTMTK_INFO("%s: BT_majorfwlog %d, devIDfwlog %d", __func__, BT_majorfwlog, devIDfwlog);
g_fwlog->g_devIDfwlog = devIDfwlog;
sema_init(&ioctl_mtx, 1);
//if (is_mt66xx(g_sbdev->chip_id)) {
if (bmain_info->hif_hook.log_init) {
bmain_info->hif_hook.log_init();
bmain_info->hif_hook.log_register_cb(fw_log_bt_event_cb);
init_waitqueue_head(&BT_log_wq);
} else {
spin_lock_init(&g_fwlog->fwlog_lock);
skb_queue_head_init(&g_fwlog->fwlog_queue);
skb_queue_head_init(&g_fwlog->usr_opcode_queue);//opcode
init_waitqueue_head(&(g_fwlog->fw_log_inq));
}
btmtk_proc_create_new_entry();
atomic_set(&bmain_info->fwlog_ref_cnt, 0);
BTMTK_INFO("%s: End", __func__);
return 0;
create_node_error:
if (g_fwlog->pBTClass) {
class_destroy(g_fwlog->pBTClass);
g_fwlog->pBTClass = NULL;
}
cdv_error:
if (cdevErr == 0)
cdev_del(&g_fwlog->BT_cdevfwlog);
if (ret == 0)
unregister_chrdev_region(devIDfwlog, 1);
alloc_error:
kfree(g_fwlog);
g_fwlog = NULL;
return -1;
}
int btmtk_fops_exitfwlog(void)
{
dev_t devIDfwlog = g_fwlog->g_devIDfwlog;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: Start\n", __func__);
//if (is_mt66xx(g_sbdev->chip_id))
if (bmain_info->hif_hook.log_deinit)
bmain_info->hif_hook.log_deinit();
if (g_fwlog->pBTDevfwlog) {
device_destroy(g_fwlog->pBTClass, devIDfwlog);
g_fwlog->pBTDevfwlog = NULL;
}
if (g_fwlog->pBTClass) {
class_destroy(g_fwlog->pBTClass);
g_fwlog->pBTClass = NULL;
}
BTMTK_INFO("%s: pBTDevfwlog, pBTClass done\n", __func__);
cdev_del(&g_fwlog->BT_cdevfwlog);
unregister_chrdev_region(devIDfwlog, 1);
BTMTK_INFO("%s: BT_chrdevfwlog driver removed.\n", __func__);
kfree(g_fwlog);
btmtk_proc_delete_entry();
return 0;
}
ssize_t btmtk_fops_readfwlog(struct file *filp, char __user *buf, size_t count, loff_t *f_pos)
{
int copyLen = 0;
ulong flags = 0;
struct sk_buff *skb = NULL;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
//if (is_mt66xx(g_sbdev->chip_id)) {
if (bmain_info->hif_hook.log_read_to_user) {
copyLen = bmain_info->hif_hook.log_read_to_user(buf, count);
BTMTK_DBG("BT F/W log from Connsys, len %d", copyLen);
return copyLen;
}
/* picus reads from a queue; it may cause a performance issue */
spin_lock_irqsave(&g_fwlog->fwlog_lock, flags);
if (skb_queue_len(&g_fwlog->fwlog_queue))
skb = skb_dequeue(&g_fwlog->fwlog_queue);
spin_unlock_irqrestore(&g_fwlog->fwlog_lock, flags);
if (skb == NULL)
return 0;
if (skb->len <= count) {
if (copy_to_user(buf, skb->data, skb->len))
BTMTK_ERR("%s: copy_to_user failed!", __func__);
copyLen = skb->len;
} else {
BTMTK_DBG("%s: socket buffer length error(count: %d, skb.len: %d)",
__func__, (int)count, skb->len);
}
kfree_skb(skb);
return copyLen;
}
ssize_t btmtk_fops_writefwlog(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos)
{
int i = 0, len = 0, ret = -1;
int hci_idx = 0;
int vlen = 0, index = 3;
struct sk_buff *skb = NULL;
struct sk_buff *skb_opcode = NULL;
int state = BTMTK_STATE_INIT;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
u8 *i_fwlog_buf = NULL;
u8 *o_fwlog_buf = NULL;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
struct btmtk_dev **pp_bdev = btmtk_get_pp_bdev();
/* only 7xxx uses writefwlog; 66xx does not */
/*if (is_mt66xx(bdev->chip_id)) {
* BTMTK_WARN("%s: not implement!", __func__);
* return 0;
* }
*/
i_fwlog_buf = kmalloc(HCI_MAX_COMMAND_BUF_SIZE, GFP_KERNEL);
if (!i_fwlog_buf) {
BTMTK_ERR("%s: alloc i_fwlog_buf failed", __func__);
ret = -ENOMEM;
goto exit;
}
o_fwlog_buf = kmalloc(HCI_MAX_COMMAND_SIZE, GFP_KERNEL);
if (!o_fwlog_buf) {
BTMTK_ERR("%s: alloc o_fwlog_buf failed", __func__);
ret = -ENOMEM;
goto exit;
}
if (count > HCI_MAX_COMMAND_BUF_SIZE) {
BTMTK_ERR("%s: your command is larger than maximum length, count = %zd",
__func__, count);
ret = -ENOMEM;
goto exit;
}
memset(i_fwlog_buf, 0, HCI_MAX_COMMAND_BUF_SIZE);
memset(o_fwlog_buf, 0, HCI_MAX_COMMAND_SIZE);
if (copy_from_user(i_fwlog_buf, buf, count) != 0) {
BTMTK_ERR("%s: Failed to copy data", __func__);
ret = -ENODATA;
goto exit;
}
/* For log level, EX: echo log_lvl=1 > /dev/stpbtfwlog */
if (strncmp(i_fwlog_buf, "log_lvl=", strlen("log_lvl=")) == 0) {
u8 val = *(i_fwlog_buf + strlen("log_lvl=")) - '0';
if (val > BTMTK_LOG_LVL_MAX || val <= 0) {
BTMTK_ERR("Got incorrect value for log level(%d)", val);
ret = -EINVAL;
goto exit;
}
btmtk_log_lvl = val;
BTMTK_INFO("btmtk_log_lvl = %d", btmtk_log_lvl);
ret = count;
goto exit;
}
/* For bperf, EX: echo bperf=1 > /dev/stpbtfwlog */
if (strncmp(i_fwlog_buf, "bperf=", strlen("bperf=")) == 0) {
u8 val = *(i_fwlog_buf + strlen("bperf=")) - '0';
g_fwlog->btmtk_bluetooth_kpi = val;
BTMTK_INFO("%s: set bluetooth KPI feature(bperf) to %d", __func__, g_fwlog->btmtk_bluetooth_kpi);
ret = count;
goto exit;
}
if (strncmp(i_fwlog_buf, "chip_reset=", strlen("chip_reset=")) == 0) {
u8 val = *(i_fwlog_buf + strlen("chip_reset=")) - '0';
bmain_info->chip_reset_flag = val;
BTMTK_INFO("%s: set chip reset flag to %d", __func__, bmain_info->chip_reset_flag);
ret = count;
goto exit;
}
if (strncmp(i_fwlog_buf, "whole chip reset", strlen("whole chip reset")) == 0) {
BTMTK_INFO("whole chip reset start");
bmain_info->chip_reset_flag = 1;
btmtk_reset_trigger(pp_bdev[hci_idx]);
ret = count;
goto exit;
}
if (strncmp(i_fwlog_buf, "subsys chip reset", strlen("subsys chip reset")) == 0) {
BTMTK_INFO("subsys chip reset");
bmain_info->chip_reset_flag = 0;
btmtk_reset_trigger(pp_bdev[hci_idx]);
ret = count;
goto exit;
}
if (strncmp(i_fwlog_buf, "dump chip reset", strlen("dump chip reset")) == 0) {
BTMTK_INFO("subsys chip reset = %d", atomic_read(&bmain_info->subsys_reset_count));
BTMTK_INFO("whole chip reset = %d", atomic_read(&bmain_info->whole_reset_count));
ret = count;
goto exit;
}
if (strncmp(i_fwlog_buf, "dump btsnoop", strlen("dump btsnoop")) == 0) {
btmtk_hci_snoop_print_to_log();
ret = count;
goto exit;
}
#ifdef BTMTK_DEBUG_SOP
if (strncmp(i_fwlog_buf, "dump test", strlen("dump test")) == 0) {
btmtk_load_debug_sop_register(pp_bdev[hci_idx]->debug_sop_file_name,
pp_bdev[hci_idx]->intf_dev, pp_bdev[hci_idx]);
ret = count;
goto exit;
}
if (strncmp(i_fwlog_buf, "dump clean", strlen("dump clean")) == 0) {
btmtk_clean_debug_reg_file(pp_bdev[hci_idx]);
ret = count;
goto exit;
}
#endif
if (strncmp(i_fwlog_buf, "dump_debug=", strlen("dump_debug")) == 0) {
u8 val = *(i_fwlog_buf + strlen("dump_debug=")) - '0';
if (bmain_info->hif_hook.dump_debug_sop) {
BTMTK_INFO("%s: dump_debug(%s)", __func__,
(val == 0) ? "SLEEP" :
((val == 1) ? "WAKEUP" :
((val == 2) ? "NO_RESPONSE" : "ERROR")));
if (fstate != BTMTK_FOPS_STATE_OPENED) {
ret = bmain_info->hif_hook.open(pp_bdev[hci_idx]->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_open failed", __func__);
ret = count;
goto exit;
}
}
bmain_info->hif_hook.dump_debug_sop(pp_bdev[hci_idx]);
if (fstate != BTMTK_FOPS_STATE_OPENED) {
ret = bmain_info->hif_hook.close(pp_bdev[hci_idx]->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_close failed", __func__);
ret = count;
goto exit;
}
}
} else {
BTMTK_INFO("%s: not support", __func__);
}
ret = count;
goto exit;
}
/* hci input command format : echo 01 be fc 01 05 > /dev/stpbtfwlog */
/* We take the data from index three to end. */
for (i = 0; i < count; i++) {
char *pos = i_fwlog_buf + i;
char temp_str[3] = {'\0'};
long res = 0;
if (*pos == ' ' || *pos == '\t' || *pos == '\r' || *pos == '\n') {
continue;
} else if (*pos == '0' && (*(pos + 1) == 'x' || *(pos + 1) == 'X')) {
i++;
continue;
} else if (!(*pos >= '0' && *pos <= '9') && !(*pos >= 'A' && *pos <= 'F')
&& !(*pos >= 'a' && *pos <= 'f')) {
BTMTK_ERR("%s: There is an invalid input(%c)", __func__, *pos);
ret = -EINVAL;
goto exit;
}
temp_str[0] = *pos;
temp_str[1] = *(pos + 1);
i++;
ret = kstrtol(temp_str, 16, &res);
if (ret == 0)
o_fwlog_buf[len++] = (u8)res;
else
BTMTK_ERR("%s: Convert %s failed(%d)", __func__, temp_str, ret);
}
if (o_fwlog_buf[0] != HCI_COMMAND_PKT && o_fwlog_buf[0] != FWLOG_TYPE) {
BTMTK_ERR("%s: Not support 0x%02X yet", __func__, o_fwlog_buf[0]);
ret = -EPROTONOSUPPORT;
goto exit;
}
/* check HCI command length */
if (len > HCI_MAX_COMMAND_SIZE) {
BTMTK_ERR("%s: command is larger than max buf size, length = %d", __func__, len);
ret = -ENOMEM;
goto exit;
}
skb = alloc_skb(count + BT_SKB_RESERVE, GFP_ATOMIC);
if (!skb) {
BTMTK_ERR("%s allocate skb failed!!", __func__);
ret = -ENOMEM;
goto exit;
}
/* send HCI command */
bt_cb(skb)->pkt_type = HCI_COMMAND_PKT;
/* format */
/* 0xF0 XX XX 00 01 AA 10 BB CC CC CC CC ... */
/* XX XX total length */
/* 00 : hci index setting type */
/* AA hci index to indicate which hci send following command*/
/* 10 : raw data type*/
/* BB command length */
/* CC command */
if (o_fwlog_buf[0] == FWLOG_TYPE) {
while (index < ((o_fwlog_buf[2] << 8) + o_fwlog_buf[1])) {
switch (o_fwlog_buf[index]) {
case FWLOG_HCI_IDX: /* hci index */
vlen = o_fwlog_buf[index + 1];
hci_idx = o_fwlog_buf[index + 2];
BTMTK_DBG("%s: send to hci%d", __func__, hci_idx);
index += (FWLOG_ATTR_TL_SIZE + vlen);
break;
case FWLOG_TX: /* tx raw data */
vlen = o_fwlog_buf[index + 1];
memcpy(skb->data, o_fwlog_buf + index + FWLOG_ATTR_TL_SIZE, vlen);
skb->len = vlen;
index = index + FWLOG_ATTR_TL_SIZE + vlen;
break;
default:
BTMTK_WARN("Invalid opcode");
ret = -1;
goto free_skb;
}
}
} else {
memcpy(skb->data, o_fwlog_buf, len);
skb->len = len;
#if defined(DRV_RETURN_SPECIFIC_HCE_ONLY) && (DRV_RETURN_SPECIFIC_HCE_ONLY == 1)
// 0xFC26 is get link & profile information command.
if (*(uint16_t *)(o_fwlog_buf + 1) != 0xFC26) {
skb_opcode = alloc_skb(len + FWLOG_PRSV_LEN, GFP_ATOMIC);
if (!skb_opcode) {
BTMTK_ERR("%s allocate skb failed!!", __func__);
ret = -ENOMEM;
goto exit;
}
memcpy(skb_opcode->data, (o_fwlog_buf + 1), 2);
skb_queue_tail(&g_fwlog->usr_opcode_queue, skb_opcode);
BTMTK_INFO("opcode is %02x,%02x", skb_opcode->data[0], skb_opcode->data[1]);
}
#endif
}
/* won't send the command if pp_bdev[hci_idx]->hdev is not defined */
if (pp_bdev[hci_idx]->hdev == NULL) {
BTMTK_DBG("pp_bdev[%d] not defined", hci_idx);
ret = count;
goto free_skb;
}
state = btmtk_get_chip_state(pp_bdev[hci_idx]);
if (state != BTMTK_STATE_WORKING) {
BTMTK_WARN("%s: current is in suspend/resume/standby/dump/disconnect (%d).",
__func__, state);
ret = -EBADFD;
goto free_skb;
}
fstate = btmtk_fops_get_state(pp_bdev[hci_idx]);
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open yet(%d)!", __func__, fstate);
ret = -ENODEV;
goto free_skb;
}
if (pp_bdev[hci_idx]->power_state == BTMTK_DONGLE_STATE_POWER_OFF) {
BTMTK_WARN("%s: dongle state already power off, do not write", __func__);
ret = -EFAULT;
goto free_skb;
}
/* clean fwlog queue before enable picus log */
if (skb_queue_len(&g_fwlog->fwlog_queue) && skb->data[0] == 0x01
&& skb->data[1] == 0x5d && skb->data[2] == 0xfc && skb->data[4] == 0x00) {
skb_queue_purge(&g_fwlog->fwlog_queue);
BTMTK_INFO("clean fwlog_queue, skb_queue_len = %d", skb_queue_len(&g_fwlog->fwlog_queue));
}
btmtk_dispatch_fwlog_bluetooth_kpi(pp_bdev[hci_idx], skb->data, skb->len, KPI_WITHOUT_TYPE);
ret = bmain_info->hif_hook.send_cmd(pp_bdev[hci_idx], skb, 0, 0, (int)BTMTK_TX_PKT_FROM_HOST);
if (ret < 0) {
BTMTK_ERR("%s failed!!", __func__);
goto free_skb;
} else
BTMTK_INFO("%s: OK", __func__);
BTMTK_INFO("%s: Write end(len: %d)", __func__, len);
ret = count;
goto exit;
free_skb:
kfree_skb(skb);
skb = NULL;
/* clean opcode queue if bt is disable */
skb_queue_purge(&g_fwlog->usr_opcode_queue);
exit:
kfree(i_fwlog_buf);
kfree(o_fwlog_buf);
return ret; /* If input is correct should return the same length */
}
int btmtk_fops_openfwlog(struct inode *inode, struct file *file)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
atomic_inc(&bmain_info->fwlog_ref_cnt);
BTMTK_INFO("%s: Start.", __func__);
return 0;
}
int btmtk_fops_closefwlog(struct inode *inode, struct file *file)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
atomic_dec(&bmain_info->fwlog_ref_cnt);
BTMTK_INFO("%s: Start.", __func__);
return 0;
}
long btmtk_fops_unlocked_ioctlfwlog(struct file *filp, unsigned int cmd, unsigned long arg)
{
long retval = 0;
uint8_t log_tmp = BT_FWLOG_OFF;
/* only 66xx uses ioctlfwlog; 76xx does not */
/* if (!is_mt66xx(g_sbdev->chip_id)) {
* BTMTK_WARN("%s: not implement!", __func__);
* return 0;
*}
*/
down(&ioctl_mtx);
switch (cmd) {
case BT_FWLOG_IOC_ON_OFF:
/* Connsyslogger daemon dynamically enable/disable Picus log */
BTMTK_INFO("[ON_OFF]arg(%lu) bt_on(0x%x) log_on(0x%x) level(0x%x) log_cur(0x%x)",
arg, g_bt_on, g_log_on, g_log_level, g_log_current);
log_tmp = (arg == 0) ? BT_FWLOG_OFF : BT_FWLOG_ON;
if (log_tmp != g_log_on) { // changed
g_log_on = log_tmp;
g_log_current = g_log_on & g_log_level;
if (g_bt_on) {
retval = btmtk_intcmd_set_fw_log(g_log_current);
btmtk_intcmd_wmt_utc_sync();
}
}
break;
case BT_FWLOG_IOC_SET_LEVEL:
/* Connsyslogger daemon dynamically set Picus log level */
BTMTK_INFO("[SET_LEVEL]arg(%lu) bt_on(0x%x) log_on(0x%x) level(0x%x) log_cur(0x%x)",
arg, g_bt_on, g_log_on, g_log_level, g_log_current);
log_tmp = (uint8_t)arg;
if (log_tmp != g_log_level) {
g_log_level = log_tmp;
g_log_current = g_log_on & g_log_level;
if (g_bt_on & g_log_on) {
// driver on and log on
retval = btmtk_intcmd_set_fw_log(g_log_current);
btmtk_intcmd_wmt_utc_sync();
}
}
break;
case BT_FWLOG_IOC_GET_LEVEL:
retval = g_log_level;
BTMTK_INFO("[GET_LEVEL]return %ld", retval);
break;
default:
BTMTK_ERR("Unknown cmd: 0x%08x", cmd);
retval = -EOPNOTSUPP;
break;
}
up(&ioctl_mtx);
return retval;
}
long btmtk_fops_compat_ioctlfwlog(struct file *filp, unsigned int cmd, unsigned long arg)
{
return btmtk_fops_unlocked_ioctlfwlog(filp, cmd, arg);
}
unsigned int btmtk_fops_pollfwlog(struct file *file, poll_table *wait)
{
unsigned int mask = 0;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
//if (is_mt66xx(g_sbdev->chip_id)) {
if (bmain_info->hif_hook.log_get_buf_size) {
poll_wait(file, &BT_log_wq, wait);
if (bmain_info->hif_hook.log_get_buf_size() > 0)
mask = (POLLIN | POLLRDNORM);
} else {
poll_wait(file, &g_fwlog->fw_log_inq, wait);
if (skb_queue_len(&g_fwlog->fwlog_queue) > 0)
mask |= POLLIN | POLLRDNORM; /* readable */
}
return mask;
}
static void btmtk_fwdump_wake_lock(void)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: enter", __func__);
__pm_stay_awake(bmain_info->fwdump_ws);
BTMTK_INFO("%s: exit", __func__);
}
static void btmtk_fwdump_wake_unlock(void)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: enter", __func__);
__pm_relax(bmain_info->fwdump_ws);
BTMTK_INFO("%s: exit", __func__);
}
static int btmtk_skb_enq_fwlog(struct btmtk_dev *bdev, void *src, u32 len, u8 type, struct sk_buff_head *queue)
{
struct sk_buff *skb_tmp = NULL;
ulong flags = 0;
int retry = 10, index = FWLOG_TL_SIZE;
do {
skb_tmp = alloc_skb(len + FWLOG_PRSV_LEN, GFP_ATOMIC);
if (skb_tmp != NULL)
break;
else if (retry <= 0) {
pr_info("%s: alloc_skb failed, give up", __func__);
return -ENOMEM;
}
pr_info("%s: alloc_skb failed, retry = %d", __func__, retry);
} while (retry-- > 0);
if (type) {
skb_tmp->data[0] = FWLOG_TYPE;
/* 01 for dongle index */
skb_tmp->data[index] = FWLOG_DONGLE_IDX;
skb_tmp->data[index + 1] = sizeof(bdev->dongle_index);
skb_tmp->data[index + 2] = bdev->dongle_index;
index += (FWLOG_ATTR_RX_LEN_LEN + FWLOG_ATTR_TYPE_LEN);
/* 11 for rx data*/
skb_tmp->data[index] = FWLOG_RX;
if (type == HCI_ACLDATA_PKT || type == HCI_EVENT_PKT || type == HCI_COMMAND_PKT) {
skb_tmp->data[index + 1] = len & 0x00FF;
skb_tmp->data[index + 2] = (len & 0xFF00) >> 8;
skb_tmp->data[index + 3] = type;
index += (HCI_TYPE_SIZE + FWLOG_ATTR_RX_LEN_LEN + FWLOG_ATTR_TYPE_LEN);
} else {
skb_tmp->data[index + 1] = len & 0x00FF;
skb_tmp->data[index + 2] = (len & 0xFF00) >> 8;
index += (FWLOG_ATTR_RX_LEN_LEN + FWLOG_ATTR_TYPE_LEN);
}
memcpy(&skb_tmp->data[index], src, len);
skb_tmp->data[1] = (len + index - FWLOG_TL_SIZE) & 0x00FF;
skb_tmp->data[2] = ((len + index - FWLOG_TL_SIZE) & 0xFF00) >> 8;
skb_tmp->len = len + index;
} else {
memcpy(skb_tmp->data, src, len);
skb_tmp->len = len;
}
spin_lock_irqsave(&g_fwlog->fwlog_lock, flags);
skb_queue_tail(queue, skb_tmp);
spin_unlock_irqrestore(&g_fwlog->fwlog_lock, flags);
return 0;
}
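`btmtk_skb_enq_fwlog()` wraps the payload in a small type/length frame with two attribute records: the dongle index ("01") and the RX data ("11"). A sketch of that layout follows; the concrete `FWLOG_TYPE` value and the attribute field widths (1-byte attribute type, 2-byte little-endian lengths, 3-byte outer header) are assumptions for illustration, since the real constants live in the driver headers:

```python
FWLOG_TYPE = 0xF0            # assumed value; defined in the driver headers
FWLOG_TL_SIZE = 3            # assumed: 1-byte type + 2-byte length header
FWLOG_DONGLE_IDX = 0x01      # "01" attribute, per the driver comment
FWLOG_RX = 0x11              # "11" attribute, per the driver comment

def frame_fwlog(payload: bytes, dongle_index: int, hci_type: int) -> bytes:
    """Mirror the HCI-typed branch of btmtk_skb_enq_fwlog()."""
    # "01" attribute: dongle index (type, 1-byte length, value).
    body = bytes([FWLOG_DONGLE_IDX, 1, dongle_index & 0xFF])
    # "11" attribute: rx data (type, 16-bit LE payload length, HCI packet
    # type). As in the C code, the length field counts only the payload.
    body += bytes([FWLOG_RX, len(payload) & 0xFF,
                   (len(payload) >> 8) & 0xFF, hci_type & 0xFF])
    body += payload
    # Outer header: FWLOG type byte plus a 16-bit LE length covering
    # everything after the header (len + index - FWLOG_TL_SIZE in C).
    return bytes([FWLOG_TYPE, len(body) & 0xFF,
                  (len(body) >> 8) & 0xFF]) + body

# Example: frame an HCI event payload (hci_type 0x04).
pkt = frame_fwlog(b"\x0e\x04\x01\x03\x0c\x00", dongle_index=0, hci_type=0x04)
assert pkt[1] | (pkt[2] << 8) == len(pkt) - FWLOG_TL_SIZE
```

When `type` is 0 the driver skips all of this framing and enqueues the raw payload, which is the path the coredump code below uses.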
int btmtk_dispatch_fwlog_bluetooth_kpi(struct btmtk_dev *bdev, u8 *buf, int len, u8 type)
{
static u8 fwlog_blocking_warn;
int ret = 0;
if (g_fwlog->btmtk_bluetooth_kpi &&
skb_queue_len(&g_fwlog->fwlog_queue) < FWLOG_BLUETOOTH_KPI_QUEUE_COUNT) {
/* sent event to queue, picus tool will log it for bluetooth KPI feature */
if (btmtk_skb_enq_fwlog(bdev, buf, len, type, &g_fwlog->fwlog_queue) == 0) {
wake_up_interruptible(&g_fwlog->fw_log_inq);
fwlog_blocking_warn = 0;
}
} else {
if (fwlog_blocking_warn == 0) {
fwlog_blocking_warn = 1;
pr_info("btmtk_usb fwlog queue size is full(bluetooth_kpi)");
}
}
return ret;
}
int btmtk_dispatch_fwlog(struct btmtk_dev *bdev, struct sk_buff *skb)
{
static u8 fwlog_picus_blocking_warn;
static u8 fwlog_fwdump_blocking_warn;
int state = BTMTK_STATE_INIT;
u8 hci_reset_event[HCI_RESET_EVT_LEN] = { 0x04, 0x0E, 0x04, 0x01, 0x03, 0x0c, 0x00 };
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
struct sk_buff *skb_opcode = NULL;
if ((bt_cb(skb)->pkt_type == HCI_ACLDATA_PKT) &&
skb->data[0] == 0x6f &&
skb->data[1] == 0xfc) {
static int dump_data_counter;
static int dump_data_length;
state = btmtk_get_chip_state(bdev);
if (state != BTMTK_STATE_FW_DUMP) {
BTMTK_INFO("%s: FW dump begin", __func__);
DUMP_TIME_STAMP("FW_dump_start");
btmtk_hci_snoop_print_to_log();
/* Printing too much log may cause a kernel panic. */
dump_data_counter = 0;
dump_data_length = 0;
btmtk_set_chip_state(bdev, BTMTK_STATE_FW_DUMP);
btmtk_fwdump_wake_lock();
}
dump_data_counter++;
dump_data_length += skb->len;
/* coredump */
/* print dump data to console */
if (dump_data_counter % 1000 == 0) {
BTMTK_INFO("%s: FW dump on-going, total_packet = %d, total_length = %d",
__func__, dump_data_counter, dump_data_length);
}
/* print dump data to console */
if (dump_data_counter < 20)
BTMTK_INFO("%s: FW dump data (%d): %s",
__func__, dump_data_counter, &skb->data[4]);
/* On newer generations (e.g. the 79xx series), the end of the coredump
 * is detected by the keyword "; coredump end".
 */
if (skb->data[skb->len - 4] == 'e' &&
skb->data[skb->len - 3] == 'n' &&
skb->data[skb->len - 2] == 'd') {
/* This is the latest coredump packet. */
BTMTK_INFO("%s: FW dump end, dump_data_counter = %d", __func__, dump_data_counter);
/* TODO: Chip reset*/
bmain_info->reset_stack_flag = HW_ERR_CODE_CORE_DUMP;
btmtk_fwdump_wake_unlock();
DUMP_TIME_STAMP("FW_dump_end");
if (bmain_info->hif_hook.waker_notify)
bmain_info->hif_hook.waker_notify(bdev);
}
if (skb_queue_len(&g_fwlog->fwlog_queue) < FWLOG_ASSERT_QUEUE_COUNT) {
/* sent picus data to queue, picus tool will log it */
if (btmtk_skb_enq_fwlog(bdev, skb->data, skb->len, 0, &g_fwlog->fwlog_queue) == 0) {
wake_up_interruptible(&g_fwlog->fw_log_inq);
fwlog_fwdump_blocking_warn = 0;
}
} else {
if (fwlog_fwdump_blocking_warn == 0) {
fwlog_fwdump_blocking_warn = 1;
pr_info("btmtk fwlog queue size is full(coredump)");
}
}
/* change coredump's ACL handle to FF F0 */
skb->data[0] = 0xFF;
skb->data[1] = 0xF0;
} else if ((bt_cb(skb)->pkt_type == HCI_ACLDATA_PKT) &&
(skb->data[0] == 0xff || skb->data[0] == 0xfe) &&
skb->data[1] == 0x05 &&
!bdev->bt_cfg.support_picus_to_host) {
/* picus or syslog */
if (skb_queue_len(&g_fwlog->fwlog_queue) < FWLOG_QUEUE_COUNT) {
if (btmtk_skb_enq_fwlog(bdev, skb->data, skb->len,
FWLOG_TYPE, &g_fwlog->fwlog_queue) == 0) {
wake_up_interruptible(&g_fwlog->fw_log_inq);
fwlog_picus_blocking_warn = 0;
}
} else {
if (fwlog_picus_blocking_warn == 0) {
fwlog_picus_blocking_warn = 1;
pr_info("btmtk fwlog queue size is full(picus)");
}
}
return 1;
} else if (memcmp(skb->data, &hci_reset_event[1], HCI_RESET_EVT_LEN - 1) == 0) {
BTMTK_INFO("%s: Get RESET_EVENT", __func__);
bdev->get_hci_reset = 1;
atomic_set(&bmain_info->subsys_reset_conti_count, 0);
}
/* filter event from usr cmd */
if ((bt_cb(skb)->pkt_type == HCI_EVENT_PKT) &&
skb->data[0] == 0x0E) {
if (skb_queue_len(&g_fwlog->usr_opcode_queue)) {
BTMTK_INFO("%s: opcode queue len is %d", __func__,
skb_queue_len(&g_fwlog->usr_opcode_queue));
skb_opcode = skb_dequeue(&g_fwlog->usr_opcode_queue);
}
if (skb_opcode == NULL)
return 0;
if (skb_opcode->data[0] == skb->data[3] &&
skb_opcode->data[1] == skb->data[4]) {
BTMTK_INFO_RAW(skb->data, skb->len, "%s: Discard event from user hci command - ", __func__);
#if defined(DRV_RETURN_SPECIFIC_HCE_ONLY) && (DRV_RETURN_SPECIFIC_HCE_ONLY == 0)
// should return to upper layer tool
if (btmtk_skb_enq_fwlog(bdev, skb->data, skb->len, FWLOG_TYPE,
&g_fwlog->fwlog_queue) == 0) {
wake_up_interruptible(&g_fwlog->fw_log_inq);
}
kfree_skb(skb_opcode);
#endif
return 1;
}
BTMTK_INFO("%s: check opcode fail!", __func__);
skb_queue_head(&g_fwlog->usr_opcode_queue, skb_opcode);
}
return 0;
}
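The coredump-terminator test in `btmtk_dispatch_fwlog()` matches the bytes `'e'`, `'n'`, `'d'` at offsets `len-4`, `len-3`, `len-2` of the packet, i.e. the tail of the keyword `; coredump end` followed by one trailing byte. A minimal sketch of that check:

```python
def is_coredump_end(data: bytes) -> bool:
    """Mirror the driver's data[len-4..len-2] == 'e','n','d' comparison."""
    return len(data) >= 4 and data[-4:-1] == b"end"

# The final coredump packet ends with the keyword plus a trailing byte.
assert is_coredump_end(b"... ; coredump end\x00")
# Intermediate dump chunks do not match.
assert not is_coredump_end(b"partial dump data")
```

Note the check deliberately ignores the very last byte, so any terminator (NUL, newline, etc.) after `end` still matches.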

File diff suppressed because it is too large


@@ -0,0 +1,993 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018 MediaTek Inc.
*/
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/input.h>
#include <linux/pm_wakeup.h>
#include <linux/interrupt.h>
#include "btmtk_woble.h"
static int is_support_unify_woble(struct btmtk_dev *bdev)
{
if (bdev->bt_cfg.support_unify_woble) {
if (is_mt7902(bdev->chip_id) || is_mt7922(bdev->chip_id) ||
is_mt6639(bdev->chip_id) || is_mt7961(bdev->chip_id))
return 1;
else
return 0;
} else {
return 0;
}
}
static void btmtk_woble_wake_lock(struct btmtk_dev *bdev)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
if (bdev->bt_cfg.support_woble_wakelock) {
BTMTK_INFO("%s: enter", __func__);
__pm_stay_awake(bmain_info->woble_ws);
BTMTK_INFO("%s: exit", __func__);
}
}
void btmtk_woble_wake_unlock(struct btmtk_dev *bdev)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
if (bdev->bt_cfg.support_woble_wakelock) {
BTMTK_INFO("%s: enter", __func__);
__pm_relax(bmain_info->woble_ws);
BTMTK_INFO("%s: exit", __func__);
}
}
#if WAKEUP_BT_IRQ
void btmtk_sdio_irq_wake_lock_timeout(struct btmtk_dev *bdev)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: enter", __func__);
__pm_wakeup_event(bmain_info->irq_ws, WAIT_POWERKEY_TIMEOUT);
BTMTK_INFO("%s: exit", __func__);
}
#endif
int btmtk_send_apcf_reserved(struct btmtk_dev *bdev)
{
u8 reserve_apcf_cmd[RES_APCF_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x05, 0x01, 0x30, 0x02, 0x61, 0x02 };
u8 reserve_apcf_event[RES_APCF_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x11 };
int ret = 0;
if (bdev == NULL) {
BTMTK_ERR("%s: Incorrect bdev", __func__);
ret = -1;
goto exit;
}
if (is_support_unify_woble(bdev)) {
if (is_mt6639(bdev->chip_id) || is_mt7902(bdev->chip_id)
|| is_mt7922(bdev->chip_id) || is_mt7961(bdev->chip_id))
ret = btmtk_main_send_cmd(bdev, reserve_apcf_cmd, RES_APCF_CMD_LEN,
reserve_apcf_event, RES_APCF_EVT_LEN, 0, 0,
BTMTK_TX_PKT_FROM_HOST);
else
BTMTK_WARN("%s: not support for 0x%x", __func__, bdev->chip_id);
BTMTK_INFO("%s: ret %d", __func__, ret);
}
exit:
return ret;
}
static int btmtk_send_woble_read_BDADDR_cmd(struct btmtk_dev *bdev)
{
u8 cmd[READ_ADDRESS_CMD_LEN] = { 0x01, 0x09, 0x10, 0x00 };
u8 event[READ_ADDRESS_EVT_HDR_LEN] = { 0x04, 0x0E, 0x0A, 0x01, 0x09, 0x10, 0x00, /* AA, BB, CC, DD, EE, FF */ };
int i;
int ret = -1;
BTMTK_INFO("%s: begin", __func__);
if (bdev == NULL || bdev->io_buf == NULL) {
BTMTK_ERR("%s: Incorrect bdev", __func__);
return ret;
}
for (i = 0; i < BD_ADDRESS_SIZE; i++) {
if (bdev->bdaddr[i] != 0) {
ret = 0;
goto done;
}
}
ret = btmtk_main_send_cmd(bdev,
cmd, READ_ADDRESS_CMD_LEN,
event, READ_ADDRESS_EVT_HDR_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
/* BD address will be received in btmtk_rx_work */
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
done:
BTMTK_INFO("%s, end, ret = %d", __func__, ret);
return ret;
}
static int btmtk_send_unify_woble_suspend_default_cmd(struct btmtk_dev *bdev)
{
u8 cmd[WOBLE_ENABLE_DEFAULT_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x24, 0x01, 0x20, 0x02, 0x00, 0x01,
0x02, 0x01, 0x00, 0x05, 0x10, 0x00, 0x00, 0x40, 0x06,
0x02, 0x40, 0x0A, 0x02, 0x41, 0x0F, 0x05, 0x24, 0x20,
0x04, 0x32, 0x00, 0x09, 0x26, 0xC0, 0x12, 0x00, 0x00,
0x12, 0x00, 0x00, 0x00};
u8 event[WOBLE_ENABLE_DEFAULT_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x00 };
int ret = 0; /* if successful, 0 */
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev,
cmd, WOBLE_ENABLE_DEFAULT_CMD_LEN,
event, WOBLE_ENABLE_DEFAULT_EVT_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
BTMTK_INFO("%s: end. ret = %d", __func__, ret);
return ret;
}
static int btmtk_send_unify_woble_resume_default_cmd(struct btmtk_dev *bdev)
{
u8 cmd[WOBLE_DISABLE_DEFAULT_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x05, 0x01, 0x21, 0x02, 0x00, 0x00 };
u8 event[WOBLE_DISABLE_DEFAULT_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x01 };
int ret = 0; /* if successful, 0 */
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev,
cmd, WOBLE_DISABLE_DEFAULT_CMD_LEN,
event, WOBLE_DISABLE_DEFAULT_EVT_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
BTMTK_INFO("%s: end. ret = %d", __func__, ret);
return ret;
}
static int btmtk_send_woble_suspend_cmd(struct btmtk_dev *bdev)
{
/* radio off cmd with wobx_mode_disable, used when unify woble off */
u8 radio_off_cmd[RADIO_OFF_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x05, 0x01, 0x20, 0x02, 0x00, 0x00 };
u8 event[RADIO_OFF_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x00 };
int ret = 0; /* if successful, 0 */
BTMTK_INFO("%s: woble not supported, send radio off cmd", __func__);
ret = btmtk_main_send_cmd(bdev,
radio_off_cmd, RADIO_OFF_CMD_LEN,
event, RADIO_OFF_EVT_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
return ret;
}
static int btmtk_send_woble_resume_cmd(struct btmtk_dev *bdev)
{
/* radio on cmd with wobx_mode_disable, used when unify woble off */
u8 radio_on_cmd[RADIO_ON_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x05, 0x01, 0x21, 0x02, 0x00, 0x00 };
u8 event[RADIO_ON_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x01 };
int ret = 0; /* if successful, 0 */
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev,
radio_on_cmd, RADIO_ON_CMD_LEN,
event, RADIO_ON_EVT_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
return ret;
}
static int btmtk_set_Woble_APCF_filter_parameter(struct btmtk_dev *bdev)
{
u8 cmd[APCF_FILTER_CMD_LEN] = { 0x01, 0x57, 0xFD, 0x0A,
0x01, 0x00, 0x0A, 0x20, 0x00, 0x20, 0x00, 0x01, 0x80, 0x00 };
u8 event[APCF_FILTER_EVT_HDR_LEN] = { 0x04, 0x0E, 0x07,
0x01, 0x57, 0xFD, 0x00, 0x01/*, 00, 63*/ };
int ret = -1;
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev, cmd, APCF_FILTER_CMD_LEN,
event, APCF_FILTER_EVT_HDR_LEN, 0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: end ret %d", __func__, ret);
else
ret = 0;
BTMTK_INFO("%s: end ret=%d", __func__, ret);
return ret;
}
/**
* Set APCF manufacturer data and filter parameter
*/
static int btmtk_set_Woble_APCF(struct btmtk_woble *bt_woble)
{
u8 manufactur_data[APCF_CMD_LEN] = { 0x01, 0x57, 0xFD, 0x27, 0x06, 0x00, 0x0A,
0x46, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x43, 0x52, 0x4B, 0x54, 0x4D,
0xFF, 0xFF, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
u8 event[APCF_EVT_HDR_LEN] = { 0x04, 0x0E, 0x07, 0x01, 0x57, 0xFD, 0x00, /* 0x06 00 63 */ };
int ret = -1;
u8 i = 0;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s: woble_setting_apcf[0].length %d",
__func__, bt_woble->woble_setting_apcf[0].length);
/* start to send apcf cmd from woble setting file */
if (bt_woble->woble_setting_apcf[0].length) {
for (i = 0; i < WOBLE_SETTING_COUNT; i++) {
if (!bt_woble->woble_setting_apcf[i].length)
continue;
BTMTK_INFO("%s: apcf_fill_mac[%d].content[0] = 0x%02x", __func__, i,
bt_woble->woble_setting_apcf_fill_mac[i].content[0]);
BTMTK_INFO("%s: apcf_fill_mac_location[%d].length = %d", __func__, i,
bt_woble->woble_setting_apcf_fill_mac_location[i].length);
if ((bt_woble->woble_setting_apcf_fill_mac[i].content[0] == 1) &&
bt_woble->woble_setting_apcf_fill_mac_location[i].length) {
/* need add BD addr to apcf cmd */
memcpy(bt_woble->woble_setting_apcf[i].content +
(*bt_woble->woble_setting_apcf_fill_mac_location[i].content + 1),
bdev->bdaddr, BD_ADDRESS_SIZE);
BTMTK_INFO("%s: apcf[%d], add local BDADDR to location %d", __func__, i,
(*bt_woble->woble_setting_apcf_fill_mac_location[i].content));
}
#if CFG_SHOW_FULL_MACADDR
BTMTK_INFO_RAW(bt_woble->woble_setting_apcf[i].content, bt_woble->woble_setting_apcf[i].length,
"Send woble_setting_apcf[%d] ", i);
#endif
ret = btmtk_main_send_cmd(bdev, bt_woble->woble_setting_apcf[i].content,
bt_woble->woble_setting_apcf[i].length, event, APCF_EVT_HDR_LEN, 0, 0,
BTMTK_TX_PKT_FROM_HOST);
if (ret < 0) {
BTMTK_ERR("%s: manufactur_data error ret %d", __func__, ret);
return ret;
}
}
} else { /* use default */
BTMTK_INFO("%s: use default manufacturer data", __func__);
memcpy(manufactur_data + 10, bdev->bdaddr, BD_ADDRESS_SIZE);
ret = btmtk_main_send_cmd(bdev, manufactur_data, APCF_CMD_LEN,
event, APCF_EVT_HDR_LEN, 0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0) {
BTMTK_ERR("%s: manufactur_data error ret %d", __func__, ret);
return ret;
}
ret = btmtk_set_Woble_APCF_filter_parameter(bdev);
}
BTMTK_INFO("%s: end ret=%d", __func__, ret);
return ret;
}
static int btmtk_set_Woble_Radio_Off(struct btmtk_woble *bt_woble)
{
int ret = -1;
int length = 0;
char *radio_off = NULL;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s: woble_setting_radio_off.length %d", __func__,
bt_woble->woble_setting_radio_off.length);
if (bt_woble->woble_setting_radio_off.length) {
/* start to send radio off cmd from woble setting file */
length = bt_woble->woble_setting_radio_off.length +
bt_woble->woble_setting_wakeup_type.length;
radio_off = kzalloc(length, GFP_KERNEL);
if (!radio_off) {
BTMTK_ERR("%s: alloc memory fail (radio_off)",
__func__);
ret = -ENOMEM;
goto Finish;
}
memcpy(radio_off,
bt_woble->woble_setting_radio_off.content,
bt_woble->woble_setting_radio_off.length);
if (bt_woble->woble_setting_wakeup_type.length) {
memcpy(radio_off + bt_woble->woble_setting_radio_off.length,
bt_woble->woble_setting_wakeup_type.content,
bt_woble->woble_setting_wakeup_type.length);
radio_off[3] += bt_woble->woble_setting_wakeup_type.length;
}
BTMTK_INFO_RAW(radio_off, length, "Send radio off");
ret = btmtk_main_send_cmd(bdev, radio_off, length,
bt_woble->woble_setting_radio_off_comp_event.content,
bt_woble->woble_setting_radio_off_comp_event.length, 0, 0,
BTMTK_TX_PKT_FROM_HOST);
kfree(radio_off);
radio_off = NULL;
} else { /* use default */
BTMTK_INFO("%s: use default radio off cmd", __func__);
ret = btmtk_send_unify_woble_suspend_default_cmd(bdev);
}
Finish:
BTMTK_INFO("%s, end ret=%d", __func__, ret);
return ret;
}
static int btmtk_set_Woble_Radio_On(struct btmtk_woble *bt_woble)
{
int ret = -1;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s: woble_setting_radio_on.length %d", __func__,
bt_woble->woble_setting_radio_on.length);
if (bt_woble->woble_setting_radio_on.length) {
/* start to send radio on cmd from woble setting file */
BTMTK_INFO_RAW(bt_woble->woble_setting_radio_on.content,
bt_woble->woble_setting_radio_on.length, "send radio on");
ret = btmtk_main_send_cmd(bdev, bt_woble->woble_setting_radio_on.content,
bt_woble->woble_setting_radio_on.length,
bt_woble->woble_setting_radio_on_comp_event.content,
bt_woble->woble_setting_radio_on_comp_event.length, 0, 0,
BTMTK_TX_PKT_FROM_HOST);
} else { /* use default */
BTMTK_WARN("%s: use default radio on cmd", __func__);
ret = btmtk_send_unify_woble_resume_default_cmd(bdev);
}
BTMTK_INFO("%s, end ret=%d", __func__, ret);
return ret;
}
static int btmtk_del_Woble_APCF_index(struct btmtk_dev *bdev)
{
u8 cmd[APCF_DELETE_CMD_LEN] = { 0x01, 0x57, 0xFD, 0x03, 0x01, 0x01, 0x0A };
u8 event[APCF_DELETE_EVT_HDR_LEN] = { 0x04, 0x0e, 0x07, 0x01, 0x57, 0xfd, 0x00, 0x01, /* 00, 63 */ };
int ret = -1;
BTMTK_INFO("%s, enter", __func__);
ret = btmtk_main_send_cmd(bdev,
cmd, APCF_DELETE_CMD_LEN,
event, APCF_DELETE_EVT_HDR_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: got error %d", __func__, ret);
BTMTK_INFO("%s, end", __func__);
return ret;
}
static int btmtk_set_Woble_APCF_Resume(struct btmtk_woble *bt_woble)
{
u8 event[APCF_RESUME_EVT_HDR_LEN] = { 0x04, 0x0e, 0x07, 0x01, 0x57, 0xfd, 0x00 };
u8 i = 0;
int ret = -1;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s, enter, bt_woble->woble_setting_apcf_resume[0].length= %d",
__func__, bt_woble->woble_setting_apcf_resume[0].length);
if (bt_woble->woble_setting_apcf_resume[0].length) {
BTMTK_INFO("%s: handle leave woble apcf from file", __func__);
for (i = 0; i < WOBLE_SETTING_COUNT; i++) {
if (!bt_woble->woble_setting_apcf_resume[i].length)
continue;
BTMTK_INFO_RAW(bt_woble->woble_setting_apcf_resume[i].content,
bt_woble->woble_setting_apcf_resume[i].length,
"%s: send apcf resume %d:", __func__, i);
ret = btmtk_main_send_cmd(bdev,
bt_woble->woble_setting_apcf_resume[i].content,
bt_woble->woble_setting_apcf_resume[i].length,
event, APCF_RESUME_EVT_HDR_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0) {
BTMTK_ERR("%s: Send apcf resume fail %d", __func__, ret);
return ret;
}
}
} else { /* use default */
BTMTK_WARN("%s: use default apcf resume cmd", __func__);
ret = btmtk_del_Woble_APCF_index(bdev);
if (ret < 0)
BTMTK_ERR("%s: btmtk_del_Woble_APCF_index return fail %d", __func__, ret);
}
BTMTK_INFO("%s, end", __func__);
return ret;
}
static int btmtk_load_woble_setting(char *bin_name,
struct device *dev, u32 *code_len, struct btmtk_woble *bt_woble)
{
int err;
struct btmtk_dev *bdev = bt_woble->bdev;
*code_len = 0;
err = btmtk_load_code_from_setting_files(bin_name, dev, code_len, bdev);
if (err) {
BTMTK_ERR("woble_setting btmtk_load_code_from_setting_files failed!!");
goto LOAD_END;
}
err = btmtk_load_fw_cfg_setting("APCF",
bt_woble->woble_setting_apcf, WOBLE_SETTING_COUNT, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("APCF_ADD_MAC",
bt_woble->woble_setting_apcf_fill_mac, WOBLE_SETTING_COUNT,
bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("APCF_ADD_MAC_LOCATION",
bt_woble->woble_setting_apcf_fill_mac_location, WOBLE_SETTING_COUNT,
bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOOFF", &bt_woble->woble_setting_radio_off, 1,
bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
switch (bdev->bt_cfg.unify_woble_type) {
case 0:
err = btmtk_load_fw_cfg_setting("WAKEUP_TYPE_LEGACY", &bt_woble->woble_setting_wakeup_type, 1,
bdev->setting_file, FW_CFG_INX_LEN_2);
break;
case 1:
err = btmtk_load_fw_cfg_setting("WAKEUP_TYPE_WAVEFORM", &bt_woble->woble_setting_wakeup_type, 1,
bdev->setting_file, FW_CFG_INX_LEN_2);
break;
case 2:
err = btmtk_load_fw_cfg_setting("WAKEUP_TYPE_IR", &bt_woble->woble_setting_wakeup_type, 1,
bdev->setting_file, FW_CFG_INX_LEN_2);
break;
default:
BTMTK_WARN("%s: unify_woble_type unknown(%d)", __func__, bdev->bt_cfg.unify_woble_type);
}
if (err)
BTMTK_WARN("%s: Parse unify_woble_type(%d) failed", __func__, bdev->bt_cfg.unify_woble_type);
err = btmtk_load_fw_cfg_setting("RADIOOFF_STATUS_EVENT",
&bt_woble->woble_setting_radio_off_status_event, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOOFF_COMPLETE_EVENT",
&bt_woble->woble_setting_radio_off_comp_event, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOON",
&bt_woble->woble_setting_radio_on, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOON_STATUS_EVENT",
&bt_woble->woble_setting_radio_on_status_event, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOON_COMPLETE_EVENT",
&bt_woble->woble_setting_radio_on_comp_event, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("APCF_RESUME",
bt_woble->woble_setting_apcf_resume, WOBLE_SETTING_COUNT, bdev->setting_file, FW_CFG_INX_LEN_2);
LOAD_END:
/* release setting file memory */
if (bdev) {
kfree(bdev->setting_file);
bdev->setting_file = NULL;
}
if (err)
BTMTK_ERR("%s: error return %d", __func__, err);
return err;
}
static void btmtk_check_wobx_debug_log(struct btmtk_dev *bdev)
{
/* 0xFF, 0xFF, 0xFF, 0xFF is log level */
u8 cmd[CHECK_WOBX_DEBUG_CMD_LEN] = { 0X01, 0xCE, 0xFC, 0x04, 0xFF, 0xFF, 0xFF, 0xFF };
u8 event[CHECK_WOBX_DEBUG_EVT_HDR_LEN] = { 0x04, 0xE8 };
int ret = -1;
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev,
cmd, CHECK_WOBX_DEBUG_CMD_LEN,
event, CHECK_WOBX_DEBUG_EVT_HDR_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
/* Driver just prints the event to the kernel log in rx_work;
 * see the wiki for what it means.
 */
}
static int btmtk_handle_leaving_WoBLE_state(struct btmtk_woble *bt_woble)
{
int ret = -1;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
struct btmtk_dev *bdev = bt_woble->bdev;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: begin", __func__);
#if WAKEUP_BT_IRQ
/* Can't enter woble mode */
BTMTK_INFO("not support woble mode for wakeup bt irq");
return 0;
#endif
fstate = btmtk_fops_get_state(bdev);
if (!bdev->bt_cfg.support_woble_for_bt_disable) {
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not opened, return", __func__);
return 0;
}
}
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open yet(%d), need to start traffic before leaving woble",
__func__, fstate);
/* start traffic to recv event*/
ret = bmain_info->hif_hook.open(bdev->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_open failed", __func__);
goto Finish;
}
}
if (is_support_unify_woble(bdev)) {
ret = btmtk_set_Woble_Radio_On(bt_woble);
if (ret < 0)
goto Finish;
ret = btmtk_set_Woble_APCF_Resume(bt_woble);
if (ret < 0)
goto Finish;
} else {
/* radio on cmd with wobx_mode_disable, used when unify woble off */
ret = btmtk_send_woble_resume_cmd(bdev);
}
Finish:
if (ret < 0) {
BTMTK_INFO("%s: woble_resume_fail!!!", __func__);
} else {
/* It's wobx debug log method. */
btmtk_check_wobx_debug_log(bdev);
if (fstate != BTMTK_FOPS_STATE_OPENED) {
ret = btmtk_send_deinit_cmds(bdev);
if (ret < 0) {
BTMTK_ERR("%s, btmtk_send_deinit_cmds failed", __func__);
goto exit;
}
BTMTK_WARN("%s: fops is not open(%d), need to stop traffic after leaving woble",
__func__, fstate);
/* stop traffic to stop recv data from fw*/
ret = bmain_info->hif_hook.close(bdev->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_close failed", __func__);
goto exit;
}
} else
bdev->power_state = BTMTK_DONGLE_STATE_POWER_ON;
BTMTK_INFO("%s: success", __func__);
}
exit:
BTMTK_INFO("%s: end", __func__);
return ret;
}
static int btmtk_handle_entering_WoBLE_state(struct btmtk_woble *bt_woble)
{
int ret = -1;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
int state = BTMTK_STATE_INIT;
struct btmtk_dev *bdev = bt_woble->bdev;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: begin", __func__);
#if WAKEUP_BT_IRQ
/* Can't enter woble mode */
BTMTK_INFO("not support woble mode for wakeup bt irq");
return 0;
#endif
fstate = btmtk_fops_get_state(bdev);
if (!bdev->bt_cfg.support_woble_for_bt_disable) {
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open yet(%d)!, return", __func__, fstate);
return 0;
}
}
state = btmtk_get_chip_state(bdev);
if (state == BTMTK_STATE_FW_DUMP) {
BTMTK_WARN("%s: FW dumping ongoing, don't send any cmd to FW!!!", __func__);
goto Finish;
}
if (atomic_read(&bmain_info->chip_reset) || atomic_read(&bmain_info->subsys_reset)) {
BTMTK_ERR("%s chip_reset is %d, subsys_reset is %d", __func__,
atomic_read(&bmain_info->chip_reset), atomic_read(&bmain_info->subsys_reset));
goto Finish;
}
/* Power on first if state is power off */
ret = btmtk_reset_power_on(bdev);
if (ret < 0) {
BTMTK_ERR("%s: reset power_on fail return", __func__);
goto Finish;
}
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open yet(%d), need to start traffic before enter woble",
__func__, fstate);
/* start traffic to recv event*/
ret = bmain_info->hif_hook.open(bdev->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_open failed", __func__);
goto Finish;
}
}
if (is_support_unify_woble(bdev)) {
do {
typedef ssize_t (*func) (u16 u16Key, const char *buf, size_t size);
char *func_name = "MDrv_PM_Write_Key";
func pFunc = NULL;
ssize_t sret = 0;
u8 buf = 0;
pFunc = (func) btmtk_kallsyms_lookup_name(func_name);
if (pFunc && bdev->bt_cfg.unify_woble_type == 1) {
buf = 1;
sret = pFunc(PM_KEY_BTW, &buf, sizeof(u8));
BTMTK_INFO("%s: Invoke %s, buf = %d, sret = %zd", __func__,
func_name, buf, sret);
} else {
BTMTK_WARN("%s: No Exported Func Found [%s]", __func__, func_name);
}
} while (0);
ret = btmtk_send_woble_read_BDADDR_cmd(bdev);
if (ret < 0)
goto STOP_TRAFFIC;
ret = btmtk_set_Woble_APCF(bt_woble);
if (ret < 0)
goto STOP_TRAFFIC;
ret = btmtk_set_Woble_Radio_Off(bt_woble);
if (ret < 0)
goto STOP_TRAFFIC;
} else {
/* radio off cmd with wobx_mode_disable, used when unify woble off */
ret = btmtk_send_woble_suspend_cmd(bdev);
}
STOP_TRAFFIC:
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open(%d), need to stop traffic after enter woble",
__func__, fstate);
/* stop traffic to stop recv data from fw*/
ret = bmain_info->hif_hook.close(bdev->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_close failed", __func__);
goto Finish;
}
}
Finish:
if (ret) {
bdev->power_state = BTMTK_DONGLE_STATE_ERROR;
btmtk_woble_wake_lock(bdev);
}
BTMTK_INFO("%s: end ret = %d, power_state =%d", __func__, ret, bdev->power_state);
return ret;
}
int btmtk_woble_suspend(struct btmtk_woble *bt_woble)
{
int ret = 0;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s: enter", __func__);
if (bdev == NULL) {
BTMTK_WARN("%s: bdev is NULL", __func__);
goto exit;
}
fstate = btmtk_fops_get_state(bdev);
if (!is_support_unify_woble(bdev) && (fstate != BTMTK_FOPS_STATE_OPENED)) {
BTMTK_WARN("%s: when not support woble, in bt off state, do nothing!", __func__);
goto exit;
}
ret = btmtk_handle_entering_WoBLE_state(bt_woble);
if (ret)
BTMTK_ERR("%s: btmtk_handle_entering_WoBLE_state return fail %d", __func__, ret);
exit:
BTMTK_INFO("%s: end", __func__);
return ret;
}
int btmtk_woble_resume(struct btmtk_woble *bt_woble)
{
int ret = 0;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
struct btmtk_dev *bdev = bt_woble->bdev;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: enter", __func__);
fstate = btmtk_fops_get_state(bdev);
if (!is_support_unify_woble(bdev) && (fstate != BTMTK_FOPS_STATE_OPENED)) {
BTMTK_WARN("%s: when not support woble, in bt off state, do nothing!", __func__);
goto exit;
}
if (bdev->power_state == BTMTK_DONGLE_STATE_ERROR) {
BTMTK_INFO("%s: In BTMTK_DONGLE_STATE_ERROR (possibly caused during suspend), do assert", __func__);
btmtk_send_assert_cmd(bdev);
ret = -EBADFD;
goto exit;
}
ret = btmtk_handle_leaving_WoBLE_state(bt_woble);
if (ret < 0) {
BTMTK_ERR("%s: btmtk_handle_leaving_WoBLE_state return fail %d", __func__, ret);
/* avoid RTC suspending again, do FW dump first */
btmtk_woble_wake_lock(bdev);
goto exit;
}
if (bdev->bt_cfg.reset_stack_after_woble
&& bmain_info->reset_stack_flag == HW_ERR_NONE
&& fstate == BTMTK_FOPS_STATE_OPENED)
bmain_info->reset_stack_flag = HW_ERR_CODE_RESET_STACK_AFTER_WOBLE;
btmtk_send_hw_err_to_host(bdev);
BTMTK_INFO("%s: end(%d), reset_stack_flag = %d, fstate = %d", __func__, ret,
bmain_info->reset_stack_flag, fstate);
exit:
BTMTK_INFO("%s: end", __func__);
return ret;
}
static irqreturn_t btmtk_woble_isr(int irq, struct btmtk_woble *bt_woble)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_DBG("%s begin", __func__);
disable_irq_nosync(bt_woble->wobt_irq);
atomic_dec(&(bt_woble->irq_enable_count));
BTMTK_INFO("disable BT IRQ, call wake lock");
__pm_wakeup_event(bmain_info->eint_ws, WAIT_POWERKEY_TIMEOUT);
input_report_key(bt_woble->WoBLEInputDev, KEY_WAKEUP, 1);
input_sync(bt_woble->WoBLEInputDev);
input_report_key(bt_woble->WoBLEInputDev, KEY_WAKEUP, 0);
input_sync(bt_woble->WoBLEInputDev);
BTMTK_DBG("%s end", __func__);
return IRQ_HANDLED;
}
static int btmtk_RegisterBTIrq(struct btmtk_woble *bt_woble)
{
struct device_node *eint_node = NULL;
int interrupts[2];
BTMTK_DBG("%s begin", __func__);
eint_node = of_find_compatible_node(NULL, NULL, "mediatek,woble_eint");
if (eint_node) {
BTMTK_INFO("Get woble_eint compatible node");
bt_woble->wobt_irq = irq_of_parse_and_map(eint_node, 0);
BTMTK_INFO("woble_irq number:%d", bt_woble->wobt_irq);
if (bt_woble->wobt_irq) {
of_property_read_u32_array(eint_node, "interrupts",
interrupts, ARRAY_SIZE(interrupts));
bt_woble->wobt_irqlevel = interrupts[1];
if (request_irq(bt_woble->wobt_irq, (void *)btmtk_woble_isr,
bt_woble->wobt_irqlevel, "woble-eint", bt_woble))
BTMTK_INFO("WOBTIRQ LINE NOT AVAILABLE!!");
else {
BTMTK_INFO("disable BT IRQ");
disable_irq_nosync(bt_woble->wobt_irq);
}
} else
BTMTK_INFO("can't find woble_eint irq");
} else {
bt_woble->wobt_irq = 0;
BTMTK_INFO("can't find woble_eint compatible node");
}
BTMTK_DBG("%s end", __func__);
return 0;
}
static int btmtk_woble_input_init(struct btmtk_woble *bt_woble)
{
int ret = 0;
bt_woble->WoBLEInputDev = input_allocate_device();
if (!bt_woble->WoBLEInputDev || IS_ERR(bt_woble->WoBLEInputDev)) {
BTMTK_ERR("input_allocate_device error");
return -ENOMEM;
}
bt_woble->WoBLEInputDev->name = "WOBLE_INPUT_DEVICE";
bt_woble->WoBLEInputDev->id.bustype = BUS_HOST;
bt_woble->WoBLEInputDev->id.vendor = 0x0002;
bt_woble->WoBLEInputDev->id.product = 0x0002;
bt_woble->WoBLEInputDev->id.version = 0x0002;
__set_bit(EV_KEY, bt_woble->WoBLEInputDev->evbit);
__set_bit(KEY_WAKEUP, bt_woble->WoBLEInputDev->keybit);
ret = input_register_device(bt_woble->WoBLEInputDev);
if (ret < 0) {
input_free_device(bt_woble->WoBLEInputDev);
BTMTK_ERR("input_register_device %d", ret);
return ret;
}
return ret;
}
static void btmtk_woble_input_deinit(struct btmtk_woble *bt_woble)
{
if (bt_woble->WoBLEInputDev) {
input_unregister_device(bt_woble->WoBLEInputDev);
/* No need to free the input device here: once unregistered,
 * the kernel frees it by itself.
 */
/* input_free_device(bt_woble->WoBLEInputDev); */
bt_woble->WoBLEInputDev = NULL;
}
}
static void btmtk_free_woble_setting_file(struct btmtk_woble *bt_woble)
{
btmtk_free_fw_cfg_struct(bt_woble->woble_setting_apcf, WOBLE_SETTING_COUNT);
btmtk_free_fw_cfg_struct(bt_woble->woble_setting_apcf_fill_mac, WOBLE_SETTING_COUNT);
btmtk_free_fw_cfg_struct(bt_woble->woble_setting_apcf_fill_mac_location, WOBLE_SETTING_COUNT);
btmtk_free_fw_cfg_struct(bt_woble->woble_setting_apcf_resume, WOBLE_SETTING_COUNT);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_off, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_off_status_event, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_off_comp_event, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_on, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_on_status_event, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_on_comp_event, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_wakeup_type, 1);
bt_woble->woble_setting_len = 0;
kfree(bt_woble->woble_setting_file_name);
bt_woble->woble_setting_file_name = NULL;
}
int btmtk_woble_initialize(struct btmtk_dev *bdev, struct btmtk_woble *bt_woble)
{
int err = 0;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
bt_woble->bdev = bdev;
/* Need to add Woble flow */
if (is_support_unify_woble(bdev)) {
if (bt_woble->woble_setting_file_name == NULL) {
bt_woble->woble_setting_file_name = kzalloc(MAX_BIN_FILE_NAME_LEN, GFP_KERNEL);
if (!bt_woble->woble_setting_file_name) {
BTMTK_ERR("%s: alloc memory fail (bt_woble->woble_setting_file_name)", __func__);
err = -1;
goto end;
}
}
(void)snprintf(bt_woble->woble_setting_file_name, MAX_BIN_FILE_NAME_LEN,
"%s_%x.%s", WOBLE_CFG_NAME_PREFIX, bdev->chip_id & 0xffff,
WOBLE_CFG_NAME_SUFFIX);
BTMTK_INFO("%s: woble setting file name is %s", __func__, bt_woble->woble_setting_file_name);
btmtk_load_woble_setting(bt_woble->woble_setting_file_name,
bdev->intf_dev,
&bt_woble->woble_setting_len,
bt_woble);
/* If reset_stack_flag is set, power the chip back on once the chip
 * reset completes so the stack can be reset.
 */
if (bmain_info->reset_stack_flag) {
err = btmtk_reset_power_on(bdev);
if (err < 0) {
BTMTK_ERR("reset power on failed!");
goto err;
}
}
}
if (bdev->bt_cfg.support_woble_by_eint) {
btmtk_woble_input_init(bt_woble);
btmtk_RegisterBTIrq(bt_woble);
}
return 0;
err:
btmtk_free_woble_setting_file(bt_woble);
end:
return err;
}
void btmtk_woble_uninitialize(struct btmtk_woble *bt_woble)
{
struct btmtk_dev *bdev = bt_woble->bdev;
if (bdev == NULL) {
BTMTK_ERR("%s: bdev == NULL", __func__);
return;
}
BTMTK_INFO("%s begin", __func__);
if (bdev->bt_cfg.support_woble_by_eint) {
if (bt_woble->wobt_irq != 0 && atomic_read(&(bt_woble->irq_enable_count)) == 1) {
BTMTK_INFO("disable BT IRQ:%d", bt_woble->wobt_irq);
atomic_dec(&(bt_woble->irq_enable_count));
disable_irq_nosync(bt_woble->wobt_irq);
} else
BTMTK_INFO("irq_enable count:%d", atomic_read(&(bt_woble->irq_enable_count)));
if (bt_woble->wobt_irq) {
free_irq(bt_woble->wobt_irq, bt_woble);
bt_woble->wobt_irq = 0;
}
btmtk_woble_input_deinit(bt_woble);
}
btmtk_free_woble_setting_file(bt_woble);
bt_woble->bdev = NULL;
}


@@ -0,0 +1,87 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016,2017 MediaTek Inc.
*/
#ifndef __BTMTK_BUFFER_MODE_H__
#define __BTMTK_BUFFER_MODE_H__
#include "btmtk_main.h"
#define BUFFER_MODE_SWITCH_FILE "wifi.cfg"
#define BUFFER_MODE_SWITCH_FIELD "EfuseBufferModeCal"
#define BUFFER_MODE_CFG_FILE "EEPROM_MT%X_1.bin"
#define EFUSE_MODE 0
#define BIN_FILE_MODE 1
#define AUTO_MODE 2
#define SET_ADDRESS_CMD_LEN 10
#define SET_ADDRESS_EVT_LEN 7
#define SET_ADDRESS_CMD_PAYLOAD_OFFSET 4
#define SET_RADIO_CMD_LEN 12
#define SET_RADIO_EVT_LEN 7
#define SET_RADIO_CMD_EDR_DEF_OFFSET 4
#define SET_RADIO_CMD_BLE_OFFSET 8
#define SET_RADIO_CMD_EDR_MAX_OFFSET 9
#define SET_RADIO_CMD_EDR_MODE_OFFSET 11
#define SET_GRP_CMD_LEN 13
#define SET_GRP_EVT_LEN 7
#define SET_GRP_CMD_PAYLOAD_OFFSET 8
#define SET_PWR_OFFSET_CMD_LEN 14
#define SET_PWR_OFFSET_EVT_LEN 7
#define SET_PWR_OFFSET_CMD_PAYLOAD_OFFSET 8
#define BUFFER_MODE_MAC_LENGTH 6
#define BT0_MAC_OFFSET 0x139
#define BT1_MAC_OFFSET 0x13F
#define BUFFER_MODE_RADIO_LENGTH 4
#define BT0_RADIO_OFFSET 0x145
#define BT1_RADIO_OFFSET 0x149
#define BUFFER_MODE_GROUP_LENGTH 5
#define BT0_GROUP_ANT0_OFFSET 0x984
#define BT0_GROUP_ANT1_OFFSET 0x9BE
#define BT1_GROUP_ANT0_OFFSET 0x9A1
#define BT1_GROUP_ANT1_OFFSET 0x9DB
#define BUFFER_MODE_CAL_LENGTH 6
#define BT0_CAL_ANT0_OFFSET 0x96C
#define BT0_CAL_ANT1_OFFSET 0x9A6
#define BT1_CAL_ANT0_OFFSET 0x989
#define BT1_CAL_ANT1_OFFSET 0x9C3
struct btmtk_buffer_mode_radio_struct {
u8 radio_0; /* bit 0-5:edr_init_pwr, 6-7:edr_pwr_mode */
u8 radio_1; /* bit 0-5:edr_max_pwr, 6-7:reserved */
u8 radio_2; /* bit 0-5:ble_default_pwr, 6-7:reserved */
u8 radio_3; /* reserved */
};
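The comments on `btmtk_buffer_mode_radio_struct` describe packed sub-fields inside each byte. A minimal standalone sketch of the extraction (helper names are illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Field extraction per the comments on btmtk_buffer_mode_radio_struct:
 * radio_0 bits 0-5 hold edr_init_pwr, bits 6-7 hold edr_pwr_mode.
 * These helper names are hypothetical, not driver APIs. */
static inline uint8_t radio_edr_init_pwr(uint8_t radio_0)
{
	return radio_0 & 0x3F;        /* bits 0-5 */
}

static inline uint8_t radio_edr_pwr_mode(uint8_t radio_0)
{
	return (radio_0 >> 6) & 0x3;  /* bits 6-7 */
}
```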
struct btmtk_buffer_mode_struct {
struct btmtk_dev *bdev;
unsigned char file_name[MAX_BIN_FILE_NAME_LEN];
int efuse_mode;
u8 bt0_mac[BUFFER_MODE_MAC_LENGTH];
u8 bt1_mac[BUFFER_MODE_MAC_LENGTH];
struct btmtk_buffer_mode_radio_struct bt0_radio;
struct btmtk_buffer_mode_radio_struct bt1_radio;
u8 bt0_ant0_grp_boundary[BUFFER_MODE_GROUP_LENGTH];
u8 bt0_ant1_grp_boundary[BUFFER_MODE_GROUP_LENGTH];
u8 bt1_ant0_grp_boundary[BUFFER_MODE_GROUP_LENGTH];
u8 bt1_ant1_grp_boundary[BUFFER_MODE_GROUP_LENGTH];
u8 bt0_ant0_pwr_offset[BUFFER_MODE_CAL_LENGTH];
u8 bt0_ant1_pwr_offset[BUFFER_MODE_CAL_LENGTH];
u8 bt1_ant0_pwr_offset[BUFFER_MODE_CAL_LENGTH];
u8 bt1_ant1_pwr_offset[BUFFER_MODE_CAL_LENGTH];
};
int btmtk_buffer_mode_send(struct btmtk_buffer_mode_struct *buffer_mode);
void btmtk_buffer_mode_initialize(struct btmtk_dev *bdev, struct btmtk_buffer_mode_struct **buffer_mode);
#endif /* __BTMTK_BUFFER_MODE_H__ */


@@ -0,0 +1,21 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018 MediaTek Inc.
*/
#ifndef __BTMTK_CHIP_IF_H__
#define __BTMTK_CHIP_IF_H__
#ifdef CHIP_IF_USB
#include "btmtk_usb.h"
#elif defined(CHIP_IF_SDIO)
#include "btmtk_sdio.h"
#elif defined(CHIP_IF_UART)
#include "btmtk_uart.h"
#elif defined(CHIP_IF_BTIF)
#include "btmtk_btif.h"
#endif
int btmtk_cif_register(void);
int btmtk_cif_deregister(void);
#endif /* __BTMTK_CHIP_IF_H__ */


@@ -0,0 +1,24 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018 MediaTek Inc.
*/
#ifndef __BTMTK_CHIP_RESET_H__
#define __BTMTK_CHIP_RESET_H__
#include <linux/version.h>
#include <linux/timer.h>
#include "btmtk_define.h"
#include "btmtk_main.h"
#include "btmtk_woble.h"
#define CHIP_RESET_TIMEOUT 20
void btmtk_reset_timer_add(struct btmtk_dev *bdev);
void btmtk_reset_timer_update(struct btmtk_dev *bdev);
void btmtk_reset_timer_del(struct btmtk_dev *bdev);
void btmtk_reset_trigger(struct btmtk_dev *bdev);
void btmtk_reset_waker(struct work_struct *work);
#endif /* __BTMTK_CHIP_RESET_H__ */


@@ -0,0 +1,381 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016,2017 MediaTek Inc.
*/
#ifndef __BTMTK_DEFINE_H__
#define __BTMTK_DEFINE_H__
#include <linux/version.h>
#include <linux/firmware.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include <linux/cdev.h>
#include <linux/spinlock.h>
#include <linux/kallsyms.h>
#include <linux/device.h>
#include <asm/unaligned.h>
/* Defines for proc node */
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
/* Define for whole chip reset */
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/kthread.h>
#include <linux/freezer.h>
#include <linux/vmalloc.h>
#include <linux/rtc.h>
/** Driver version */
#define VERSION "7.0.2022031401"
#define SUBVER ":turnkey"
#ifdef CFG_SUPPORT_WAKEUP_IRQ
#define WAKEUP_BT_IRQ 1
#else
#define WAKEUP_BT_IRQ 0
#endif
#define ENABLESTP FALSE
#define BTMTKUART_TX_STATE_ACTIVE 1
#define BTMTKUART_TX_STATE_WAKEUP 2
#define BTMTK_TX_WAIT_VND_EVT 3
#define BTMTKUART_REQUIRED_WAKEUP 4
#define BTMTKUART_REQUIRED_DOWNLOAD 5
#define BTMTK_TX_SKIP_VENDOR_EVT 6
#define BTMTKUART_RX_STATE_ACTIVE 1
#define BTMTKUART_RX_STATE_WAKEUP 2
#define BTMTKUART_RX_STATE_RESET 3
/**
* Maximum rom patch file name length
*/
#define MAX_BIN_FILE_NAME_LEN 64
/**
* Type definition
*/
#ifndef TRUE
#define TRUE 1
#endif
#ifndef FALSE
#define FALSE 0
#endif
#ifndef UNUSED
#define UNUSED(x) (void)(x)
#endif
#ifndef ALIGN_4
#define ALIGN_4(_value) (((_value) + 3) & ~3u)
#endif /* ALIGN_4 */
#ifndef ALIGN_8
#define ALIGN_8(_value) (((_value) + 7) & ~7u)
#endif /* ALIGN_8 */
/* These macros check the DW (4-byte) alignment of the input value.
 * _value - value or address to check
 */
#ifndef IS_ALIGN_4
#define IS_ALIGN_4(_value) (((_value) & 0x3) ? FALSE : TRUE)
#endif /* IS_ALIGN_4 */
#ifndef IS_NOT_ALIGN_4
#define IS_NOT_ALIGN_4(_value) (((_value) & 0x3) ? TRUE : FALSE)
#endif /* IS_NOT_ALIGN_4 */
#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#define MAX(a, b) (((a) > (b)) ? (a) : (b))
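The alignment helpers above are plain bit masks; a minimal standalone sketch (macro bodies copied from the defines above, with TRUE/FALSE written as 1/0) shows the rounding behavior:

```c
#include <assert.h>

/* Copies of the driver's alignment helpers, shown in isolation. */
#define ALIGN_4(_value) (((_value) + 3) & ~3u)
#define ALIGN_8(_value) (((_value) + 7) & ~7u)
#define IS_ALIGN_4(_value) (((_value) & 0x3) ? 0 : 1)

unsigned int align4(unsigned int v) { return ALIGN_4(v); }   /* round up to 4 */
unsigned int align8(unsigned int v) { return ALIGN_8(v); }   /* round up to 8 */
int is_align4(unsigned int v) { return IS_ALIGN_4(v); }      /* 1 if 4-byte aligned */
```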
/**
* Log and level definition
*/
#define BTMTK_LOG_LVL_ERR 1
#define BTMTK_LOG_LVL_WARN 2
#define BTMTK_LOG_LVL_INFO 3
#define BTMTK_LOG_LVL_DBG 4
#define BTMTK_LOG_LVL_MAX BTMTK_LOG_LVL_DBG
#define BTMTK_LOG_LVL_DEF BTMTK_LOG_LVL_INFO /* default setting */
#define HCI_SNOOP_ENTRY_NUM 30
#define HCI_SNOOP_BUF_SIZE 32
#define HCI_SNOOP_MAX_BUF_SIZE 66
#define HCI_SNOOP_TS_STR_LEN 24
#define WMT_OVER_HCI_HEADER_SIZE 3
#define READ_ISO_PACKET_CMD_SIZE 4
extern uint8_t btmtk_log_lvl;
#define BTMTK_ERR(fmt, ...) \
do { if (btmtk_log_lvl >= BTMTK_LOG_LVL_ERR) pr_info("[btmtk_err] ***"fmt"***\n", ##__VA_ARGS__); } while (0)
#define BTMTK_WARN(fmt, ...) \
do { if (btmtk_log_lvl >= BTMTK_LOG_LVL_WARN) pr_info("[btmtk_warn] "fmt"\n", ##__VA_ARGS__); } while (0)
#define BTMTK_INFO(fmt, ...) \
do { if (btmtk_log_lvl >= BTMTK_LOG_LVL_INFO) pr_info("[btmtk_info] "fmt"\n", ##__VA_ARGS__); } while (0)
#define BTMTK_DBG(fmt, ...) \
do { if (btmtk_log_lvl >= BTMTK_LOG_LVL_DBG) pr_info("[btmtk_dbg] "fmt"\n", ##__VA_ARGS__); } while (0)
#define BTMTK_WARN_LIMITTED(fmt, ...) \
do { \
if (btmtk_log_lvl >= BTMTK_LOG_LVL_WARN) \
pr_info("[btmtk_warn_limit] "fmt"\n", ##__VA_ARGS__); \
} while (0)
#define BTMTK_INFO_RAW(p, l, fmt, ...) \
do { \
if (btmtk_log_lvl >= BTMTK_LOG_LVL_INFO) { \
int cnt_ = 0; \
int len_ = (l <= HCI_SNOOP_MAX_BUF_SIZE ? l : HCI_SNOOP_MAX_BUF_SIZE); \
uint8_t raw_buf[HCI_SNOOP_MAX_BUF_SIZE * 5 + 10]; \
const unsigned char *ptr = p; \
for (cnt_ = 0; cnt_ < len_; ++cnt_) \
(void)snprintf(raw_buf+5*cnt_, 6, "0x%02X ", ptr[cnt_]); \
raw_buf[5*cnt_] = '\0'; \
if (l <= HCI_SNOOP_MAX_BUF_SIZE) { \
pr_cont("[btmtk_info] "fmt"%s\n", ##__VA_ARGS__, raw_buf); \
} else { \
pr_cont("[btmtk_info] "fmt"%s (prtail)\n", ##__VA_ARGS__, raw_buf); \
} \
} \
} while (0)
#define BTMTK_DBG_RAW(p, l, fmt, ...) \
do { \
if (btmtk_log_lvl >= BTMTK_LOG_LVL_DBG) { \
int cnt_ = 0; \
int len_ = (l <= HCI_SNOOP_MAX_BUF_SIZE ? l : HCI_SNOOP_MAX_BUF_SIZE); \
uint8_t raw_buf[HCI_SNOOP_MAX_BUF_SIZE * 5 + 10]; \
const unsigned char *ptr = p; \
for (cnt_ = 0; cnt_ < len_; ++cnt_) \
(void)snprintf(raw_buf+5*cnt_, 6, "0x%02X ", ptr[cnt_]); \
raw_buf[5*cnt_] = '\0'; \
if (l <= HCI_SNOOP_MAX_BUF_SIZE) { \
pr_cont("[btmtk_debug] "fmt"%s\n", ##__VA_ARGS__, raw_buf); \
} else { \
pr_cont("[btmtk_debug] "fmt"%s (prtail)\n", ##__VA_ARGS__, raw_buf); \
} \
} \
} while (0)
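The `BTMTK_INFO_RAW`/`BTMTK_DBG_RAW` macros above format each byte as a fixed 5-character `0xXX ` cell written with `snprintf` at stride 5. A userspace sketch of just that formatting step (function name is mine):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the raw-dump formatting used by the BTMTK_*_RAW macros:
 * each byte becomes the 5 characters "0xXX " written at offset 5*i,
 * then the buffer is NUL-terminated after the last cell. */
static void fmt_raw(char *out, const unsigned char *p, int len)
{
	int i;

	for (i = 0; i < len; i++)
		(void)snprintf(out + 5 * i, 6, "0x%02X ", p[i]);
	out[5 * i] = '\0';
}
```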
/* Taking the address of cif_state[cif_event] can never yield NULL,
 * so test the cif_state pointer itself.
 */
#define BTMTK_CIF_IS_NULL(bdev, cif_event) \
(!(bdev) || !(bdev)->cif_state)
/**
*
* HCI packet type
*/
#define MTK_HCI_COMMAND_PKT 0x01
#define MTK_HCI_ACLDATA_PKT 0x02
#define MTK_HCI_SCODATA_PKT 0x03
#define MTK_HCI_EVENT_PKT 0x04
#define HCI_ISO_PKT 0x05
#define HCI_ISO_PKT_HEADER_SIZE 4
#define HCI_ISO_PKT_WITH_ACL_HEADER_SIZE 5
/**
* ROM patch related
*/
#define PATCH_HCI_HEADER_SIZE 4
#define PATCH_WMT_HEADER_SIZE 5
/*
* Enable STP
* HCI+WMT+STP = 4 + 5 + 1(phase) +(4=STP_HEADER + 2=CRC)
#define PATCH_HEADER_SIZE 16
*/
/*#ifdef ENABLESTP
* #define PATCH_HEADER_SIZE (PATCH_HCI_HEADER_SIZE + PATCH_WMT_HEADER_SIZE + 1 + 6)
* #define UPLOAD_PATCH_UNIT 916
* #define PATCH_INFO_SIZE 30
*#else
*/
#define PATCH_HEADER_SIZE (PATCH_HCI_HEADER_SIZE + PATCH_WMT_HEADER_SIZE + 1)
/* TODO: if USB uses a 901-byte patch unit size, the patch download
 * will time out because the timeout has been set to 1s.
 */
#define UPLOAD_PATCH_UNIT 1988
#define PATCH_INFO_SIZE 30
/*#endif*/
#define PATCH_PHASE1 1
#define PATCH_PHASE2 2
#define PATCH_PHASE3 3
/* For MT7961 ROM patch download */
#define FW_ROM_PATCH_HEADER_SIZE 32
#define FW_ROM_PATCH_GD_SIZE 64
#define FW_ROM_PATCH_SEC_MAP_SIZE 64
#define SEC_MAP_NEED_SEND_SIZE 52
#define PATCH_STATUS 7
#define IO_BUF_SIZE (HCI_MAX_EVENT_SIZE > 256 ? HCI_MAX_EVENT_SIZE : 256)
#define EVENT_COMPARE_SIZE 64
#define SECTION_SPEC_NUM 13
#define BD_ADDRESS_SIZE 6
#define PHASE1_WMT_CMD_COUNT 255
#define VENDOR_CMD_COUNT 255
#define BT_CFG_NAME "bt.cfg"
#define BT_CFG_NAME_PREFIX "bt_mt"
#define BT_CFG_NAME_SUFFIX "cfg"
#define WOBLE_CFG_NAME_PREFIX "woble_setting"
#define WOBLE_CFG_NAME_SUFFIX "bin"
#define BT_UNIFY_WOBLE "SUPPORT_UNIFY_WOBLE"
#define BT_UNIFY_WOBLE_TYPE "UNIFY_WOBLE_TYPE"
#define BT_WOBLE_BY_EINT "SUPPORT_WOBLE_BY_EINT"
#define BT_DONGLE_RESET_PIN "BT_DONGLE_RESET_GPIO_PIN"
#define BT_RESET_DONGLE "SUPPORT_DONGLE_RESET"
#define BT_FULL_FW_DUMP "SUPPORT_FULL_FW_DUMP"
#define BT_WOBLE_WAKELOCK "SUPPORT_WOBLE_WAKELOCK"
#define BT_WOBLE_FOR_BT_DISABLE "SUPPORT_WOBLE_FOR_BT_DISABLE"
#define BT_RESET_STACK_AFTER_WOBLE "RESET_STACK_AFTER_WOBLE"
#define BT_AUTO_PICUS "SUPPORT_AUTO_PICUS"
#define BT_AUTO_PICUS_FILTER "PICUS_FILTER_COMMAND"
#define BT_AUTO_PICUS_ENABLE "PICUS_ENABLE_COMMAND"
#define BT_PICUS_TO_HOST "SUPPORT_PICUS_TO_HOST"
#define BT_PHASE1_WMT_CMD "PHASE1_WMT_CMD"
#define BT_VENDOR_CMD "VENDOR_CMD"
#define BT_SINGLE_SKU "SUPPORT_BT_SINGLE_SKU"
#define BT_AUDIO_SET "SUPPORT_BT_AUDIO_SETTING"
#define BT_AUDIO_ENABLE_CMD "AUDIO_ENABLE_CMD"
#define BT_AUDIO_PINMUX_NUM "AUDIO_PINMUX_NUM"
#define BT_AUDIO_PINMUX_MODE "AUDIO_PINMUX_MODE"
#define PM_KEY_BTW (0x0015) /* Notify PM the unify woble type */
#define BTMTK_RESET_DOING 1
#define BTMTK_RESET_DONE 0
#define BTMTK_MAX_SUBSYS_RESET_COUNT 3
/**
 * Disable RESET_RESUME
 */
#ifndef BT_DISABLE_RESET_RESUME
#define BT_DISABLE_RESET_RESUME 0
#endif
enum fw_cfg_index_len {
FW_CFG_INX_LEN_NONE = 0,
FW_CFG_INX_LEN_2 = 2,
FW_CFG_INX_LEN_3 = 3,
};
struct fw_cfg_struct {
char *content; /* APCF content or radio off content */
u32 length; /* length of APCF content or radio off content */
};
struct bt_cfg_struct {
bool support_unify_woble; /* support unify woble or not */
bool support_woble_by_eint; /* support woble by eint or not */
bool support_dongle_reset; /* support chip reset or not */
bool support_full_fw_dump; /* dump full fw coredump or not */
bool support_woble_wakelock; /* take a wakelock on WoBLE error or not */
bool support_woble_for_bt_disable; /* when BT is disabled, support entering suspend or not */
bool reset_stack_after_woble; /* support resetting the stack after resume to reconnect IoT devices */
bool support_auto_picus; /* support enable PICUS automatically */
struct fw_cfg_struct picus_filter; /* support on PICUS filter command customization */
struct fw_cfg_struct picus_enable; /* support on PICUS enable command customization */
bool support_picus_to_host; /* support picus log to host (boots/bluedroid) */
int dongle_reset_gpio_pin; /* BT_DONGLE_RESET_GPIO_PIN number */
unsigned int unify_woble_type; /* 0: legacy. 1: waveform. 2: IR */
struct fw_cfg_struct phase1_wmt_cmd[PHASE1_WMT_CMD_COUNT];
struct fw_cfg_struct vendor_cmd[VENDOR_CMD_COUNT];
bool support_bt_single_sku;
bool support_audio_setting; /* support audio set pinmux */
struct fw_cfg_struct audio_cmd; /* support on audio enable command customization */
struct fw_cfg_struct audio_pinmux_num; /* support on set audio pinmux num command customization */
struct fw_cfg_struct audio_pinmux_mode; /* support on set audio pinmux mode command customization */
};
struct bt_utc_struct {
struct rtc_time tm;
u32 usec;
};
#define BT_DOWNLOAD 1
#define WIFI_DOWNLOAD 2
#define ZB_DOWNLOAD 3
enum debug_reg_index_len {
DEBUG_REG_INX_LEN_NONE = 0,
DEBUG_REG_INX_LEN_2 = 2,
DEBUG_REG_INX_LEN_3 = 3,
};
#define DEBUG_REG_SIZE 10
#define DEBUG_REG_NUM 10
struct debug_reg {
u32 *content;
u32 length;
};
struct debug_reg_struct {
struct debug_reg *reg;
u32 num;
};
#define SWAP32(x) \
((u32) (\
(((u32) (x) & (u32) 0x000000ffUL) << 24) | \
(((u32) (x) & (u32) 0x0000ff00UL) << 8) | \
(((u32) (x) & (u32) 0x00ff0000UL) >> 8) | \
(((u32) (x) & (u32) 0xff000000UL) >> 24)))
/* Endian byte swapping codes */
#ifdef __LITTLE_ENDIAN
#define cpu2le32(x) ((uint32_t)(x))
#define le2cpu32(x) ((uint32_t)(x))
#define cpu2be32(x) SWAP32((x))
#define be2cpu32(x) SWAP32((x))
#else
#define cpu2le32(x) SWAP32((x))
#define le2cpu32(x) SWAP32((x))
#define cpu2be32(x) ((uint32_t)(x))
#define be2cpu32(x) ((uint32_t)(x))
#endif
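The byte-swap macro underpinning the endian helpers above can be exercised standalone; this copy reverses the four bytes of a 32-bit word:

```c
#include <assert.h>
#include <stdint.h>

/* Copy of the driver's SWAP32 byte-reversal macro, shown in isolation. */
#define SWAP32(x) \
	((uint32_t)( \
	(((uint32_t)(x) & 0x000000ffUL) << 24) | \
	(((uint32_t)(x) & 0x0000ff00UL) << 8)  | \
	(((uint32_t)(x) & 0x00ff0000UL) >> 8)  | \
	(((uint32_t)(x) & 0xff000000UL) >> 24)))

uint32_t swap32(uint32_t x) { return SWAP32(x); }
```

On a little-endian build, `cpu2be32` expands to this swap while `cpu2le32` is the identity; swapping twice returns the original value.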
#define FW_VERSION 0x80021004
#define CHIP_ID 0x70010200
#define FLAVOR 0x70010020
#ifndef DEBUG_LD_PATCH_TIME
#define DEBUG_LD_PATCH_TIME 0
#endif
#ifndef DEBUG_DUMP_TIME
#define DEBUG_DUMP_TIME 0
#endif
#define ERRNUM 0xFF
#if DEBUG_DUMP_TIME
void btmtk_getUTCtime(struct bt_utc_struct *utc);
#define DUMP_TIME_STAMP(__str) \
do { \
struct bt_utc_struct utc; \
btmtk_getUTCtime(&utc); \
BTMTK_INFO("%s:%d, %s - DUMP_TIME_STAMP UTC: %d-%02d-%02d %02d:%02d:%02d.%06u", \
__func__, __LINE__, __str, \
utc.tm.tm_year, utc.tm.tm_mon, utc.tm.tm_mday, \
utc.tm.tm_hour, utc.tm.tm_min, utc.tm.tm_sec, utc.usec); \
} while (0)
#else
#define DUMP_TIME_STAMP(__str)
#endif
#endif /* __BTMTK_DEFINE_H__ */


@@ -0,0 +1,145 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016,2017 MediaTek Inc.
*/
#ifndef _BTMTK_DRV_H_
#define _BTMTK_DRV_H_
#include <linux/kthread.h>
#include <linux/bitops.h>
#include <linux/slab.h>
#include <net/bluetooth/bluetooth.h>
#define SAVE_FW_DUMP_IN_KERNEL 1
#define SUPPORT_FW_DUMP 1
#define BTM_HEADER_LEN 5
#define BTM_UPLD_SIZE 2312
#define MTK_TXDATA_SIZE 2000
#define MTK_RXDATA_SIZE 2000
/* Time to wait until Host Sleep state change in millisecond */
#define WAIT_UNTIL_HS_STATE_CHANGED msecs_to_jiffies(5000)
/* Time to wait for command response in millisecond */
#define WAIT_UNTIL_CMD_RESP msecs_to_jiffies(5000)
enum rdwr_status {
RDWR_STATUS_SUCCESS = 0,
RDWR_STATUS_FAILURE = 1,
RDWR_STATUS_DONE = 2
};
#define FW_DUMP_MAX_NAME_LEN 8
#define FW_DUMP_HOST_READY 0xEE
#define FW_DUMP_DONE 0xFF
#define FW_DUMP_READ_DONE 0xFE
struct memory_type_mapping {
u8 mem_name[FW_DUMP_MAX_NAME_LEN];
u8 *mem_ptr;
u32 mem_size;
u8 done_flag;
};
#define MTK_VENDOR_PKT 0xFE
/* Vendor specific Bluetooth commands */
#define BT_CMD_PSCAN_WIN_REPORT_ENABLE 0xFC03
#define BT_CMD_ROUTE_SCO_TO_HOST 0xFC1D
#define BT_CMD_SET_BDADDR 0xFC22
#define BT_CMD_AUTO_SLEEP_MODE 0xFC23
#define BT_CMD_HOST_SLEEP_CONFIG 0xFC59
#define BT_CMD_HOST_SLEEP_ENABLE 0xFC5A
#define BT_CMD_MODULE_CFG_REQ 0xFC5B
#define BT_CMD_LOAD_CONFIG_DATA 0xFC61
/* Sub-commands: Module Bringup/Shutdown Request/Response */
#define MODULE_BRINGUP_REQ 0xF1
#define MODULE_BROUGHT_UP 0x00
#define MODULE_ALREADY_UP 0x0C
#define MODULE_SHUTDOWN_REQ 0xF2
/* Vendor specific Bluetooth events */
#define BT_EVENT_AUTO_SLEEP_MODE 0x23
#define BT_EVENT_HOST_SLEEP_CONFIG 0x59
#define BT_EVENT_HOST_SLEEP_ENABLE 0x5A
#define BT_EVENT_MODULE_CFG_REQ 0x5B
#define BT_EVENT_POWER_STATE 0x20
/* Bluetooth Power States */
#define BT_PS_ENABLE 0x02
#define BT_PS_DISABLE 0x03
#define BT_PS_SLEEP 0x01
/* Host Sleep states */
#define HS_ACTIVATED 0x01
#define HS_DEACTIVATED 0x00
/* Power Save modes */
#define PS_SLEEP 0x01
#define PS_AWAKE 0x00
#define BT_CAL_HDR_LEN 4
#define BT_CAL_DATA_SIZE 28
#define FW_DUMP_BUF_SIZE (1024*512)
#define FW_DUMP_FILE_NAME_SIZE 64
/* #define SAVE_FW_DUMP_IN_KERNEL 1 */
/* stpbt device node */
#define BT_NODE "stpbt"
#define BT_DRIVER_NAME "BT_chrdev"
struct btmtk_event {
u8 ec; /* event counter */
u8 length;
u8 data[4];
} __packed;
/* Prototype of global function */
struct btmtk_private *btmtk_add_card(void *card);
int btmtk_remove_card(struct btmtk_private *priv);
void btmtk_interrupt(struct btmtk_private *priv);
bool btmtk_check_evtpkt(struct btmtk_private *priv, struct sk_buff *skb);
int btmtk_process_event(struct btmtk_private *priv, struct sk_buff *skb);
int btmtk_send_module_cfg_cmd(struct btmtk_private *priv, u8 subcmd);
int btmtk_pscan_window_reporting(struct btmtk_private *priv, u8 subcmd);
int btmtk_send_hscfg_cmd(struct btmtk_private *priv);
int btmtk_enable_ps(struct btmtk_private *priv);
int btmtk_prepare_command(struct btmtk_private *priv);
int btmtk_enable_hs(struct btmtk_private *priv);
void btmtk_firmware_dump(struct btmtk_private *priv);
#define META_BUFFER_SIZE (1024*50)
struct _OSAL_UNSLEEPABLE_LOCK_ {
spinlock_t lock;
unsigned long flag;
};
struct ring_buffer {
struct _OSAL_UNSLEEPABLE_LOCK_ spin_lock;
u8 buffer[META_BUFFER_SIZE]; /* MTKSTP_BUFFER_SIZE:1024 */
u32 read_p; /* indicate the current read index */
u32 write_p; /* indicate the current write index */
};
#ifdef CONFIG_DEBUG_FS
#define FW_DUMP_END_EVENT "coredump end"
#endif
#endif


@@ -0,0 +1,63 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016,2017 MediaTek Inc.
*/
#ifndef __BTMTK_FW_LOG_H__
#define __BTMTK_FW_LOG_H__
#include "btmtk_main.h"
#include "btmtk_chip_reset.h"
#define BT_FWLOG_IOC_MAGIC (0xfc)
#define BT_FWLOG_IOC_ON_OFF _IOW(BT_FWLOG_IOC_MAGIC, 0, int)
#define BT_FWLOG_IOC_SET_LEVEL _IOW(BT_FWLOG_IOC_MAGIC, 1, int)
#define BT_FWLOG_IOC_GET_LEVEL _IOW(BT_FWLOG_IOC_MAGIC, 2, int)
#define BT_FWLOG_OFF 0x00
#define BT_FWLOG_ON 0xFF
#define DRV_RETURN_SPECIFIC_HCE_ONLY 1 /* Currently only allow 0xFC26 */
#define KPI_WITHOUT_TYPE 0 /* bluetooth kpi */
#ifdef STATIC_REGISTER_FWLOG_NODE
#define FIXED_STPBT_MAJOR_DEV_ID 111
#endif
/* Device node */
#if CFG_SUPPORT_MULTI_DEV_NODE
#define BT_FWLOG_DEV_NODE "stpbt_multi_fwlog"
#else
#define BT_FWLOG_DEV_NODE "stpbtfwlog"
#endif
#define PROC_ROOT_DIR "stpbt"
#define PROC_BT_CHIP_RESET_COUNT "bt_chip_reset_count"
struct btmtk_fops_fwlog {
dev_t g_devIDfwlog;
struct cdev BT_cdevfwlog;
wait_queue_head_t fw_log_inq;
struct sk_buff_head fwlog_queue;
struct class *pBTClass;
struct device *pBTDevfwlog;
spinlock_t fwlog_lock;
u8 btmtk_bluetooth_kpi;
struct sk_buff_head usr_opcode_queue;
};
int btmtk_fops_initfwlog(void);
int btmtk_fops_exitfwlog(void);
void fw_log_bt_event_cb(void);
void fw_log_bt_state_cb(uint8_t state);
/** file_operations: stpbtfwlog */
int btmtk_fops_openfwlog(struct inode *inode, struct file *file);
int btmtk_fops_closefwlog(struct inode *inode, struct file *file);
ssize_t btmtk_fops_readfwlog(struct file *filp, char __user *buf, size_t count, loff_t *f_pos);
ssize_t btmtk_fops_writefwlog(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos);
unsigned int btmtk_fops_pollfwlog(struct file *filp, poll_table *wait);
long btmtk_fops_unlocked_ioctlfwlog(struct file *filp, unsigned int cmd, unsigned long arg);
long btmtk_fops_compat_ioctlfwlog(struct file *filp, unsigned int cmd, unsigned long arg);
int btmtk_dispatch_fwlog(struct btmtk_dev *bdev, struct sk_buff *skb);
int btmtk_dispatch_fwlog_bluetooth_kpi(struct btmtk_dev *bdev, u8 *buf, int len, u8 type);
#endif /* __BTMTK_FW_LOG_H__ */


@@ -0,0 +1,808 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018 MediaTek Inc.
*/
#ifndef __BTMTK_MAIN_H__
#define __BTMTK_MAIN_H__
#include "btmtk_define.h"
#define DEFAULT_COUNTRY_TABLE_NAME "btPowerTable.dat"
#ifdef CHIP_IF_USB
#define DEFAULT_DEBUG_SOP_NAME "usb_debug"
#elif defined(CHIP_IF_SDIO)
#define DEFAULT_DEBUG_SOP_NAME "sdio_debug"
#endif
//static inline struct sk_buff *mtk_add_stp(struct btmtk_dev *bdev, struct sk_buff *skb);
#define hci_dev_test_and_clear_flag(hdev, nr) test_and_clear_bit((nr), (hdev)->dev_flags)
/* h4_recv */
#define hci_skb_pkt_type(skb) bt_cb((skb))->pkt_type
#define hci_skb_expect(skb) bt_cb((skb))->expect
#define hci_skb_opcode(skb) bt_cb((skb))->hci.opcode
/* HCI bus types */
#define HCI_VIRTUAL 0
#define HCI_USB 1
#define HCI_PCCARD 2
#define HCI_UART 3
#define HCI_RS232 4
#define HCI_PCI 5
#define HCI_SDIO 6
#define HCI_SPI 7
#define HCI_I2C 8
#define HCI_SMD 9
#define HCI_TYPE_SIZE 1
/* Patch download status for MT7663:
 * 0:
 * patch download is not complete / BT failed to get the patch semaphore (WiFi got it)
 * 1:
 * patch download is complete
 * 2:
 * patch download is not complete / BT got the patch semaphore
 */
#define MT766X_PATCH_IS_DOWNLOAD_BY_OTHER 0
#define MT766X_PATCH_READY 1
#define MT766X_PATCH_NEED_DOWNLOAD 2
/* Patch download status for MT79XX:
 * 0:
 * patch download is not complete, BT driver needs to download the patch
 * 1:
 * patch is being downloaded by WiFi, BT driver retries until status = PATCH_READY
 * 2:
 * patch download is complete, BT driver does not need to download the patch
 */
#define PATCH_ERR -1
#define PATCH_NEED_DOWNLOAD 0
#define PATCH_IS_DOWNLOAD_BY_OTHER 1
#define PATCH_READY 2
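The status values above describe a poll-until-ready handshake against the patch semaphore shared with WiFi. A minimal sketch of that loop, assuming a status-query callback (`query` and `fake_query` are stand-ins, not driver APIs):

```c
#include <assert.h>

/* Status values copied from the defines above. */
#define PATCH_ERR                  -1
#define PATCH_NEED_DOWNLOAD         0
#define PATCH_IS_DOWNLOAD_BY_OTHER  1
#define PATCH_READY                 2

/* Poll the patch status until the patch is ready or it is this
 * driver's turn to download; give up after max_retry polls.
 * A real driver would sleep between polls. */
int wait_patch_status(int (*query)(void), int max_retry)
{
	int retry, status;

	for (retry = 0; retry < max_retry; retry++) {
		status = query();
		if (status == PATCH_READY || status == PATCH_NEED_DOWNLOAD)
			return status;
		/* PATCH_IS_DOWNLOAD_BY_OTHER: WiFi holds the semaphore. */
	}
	return PATCH_ERR;
}

/* Test stub: busy twice, then ready. */
static int fake_calls;
static int fake_query(void)
{
	return (++fake_calls < 3) ? PATCH_IS_DOWNLOAD_BY_OTHER : PATCH_READY;
}
```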
/* 0:
 * use legacy WMT cmd/evt to download the FW patch; USB/SDIO only support 0 for now
 * 1:
 * use DMA to download the FW patch
 */
#define PATCH_DOWNLOAD_USING_WMT 0
#define PATCH_DOWNLOAD_USING_DMA 1
#define PATCH_DOWNLOAD_PHASE1_2_DELAY_TIME 1
#define PATCH_DOWNLOAD_PHASE1_2_RETRY 5
#define PATCH_DOWNLOAD_PHASE3_DELAY_TIME 20
#define PATCH_DOWNLOAD_PHASE3_RETRY 20
#define PATCH_DOWNLOAD_PHASE3_SECURE_BOOT_DELAY_TIME 200
#define TIME_MULTIPL 1000
#define TIME_US_OFFSET_RANGE 2000
/* delay and retry for main_send_cmd */
#define WMT_DELAY_TIMES 100
#define DELAY_TIMES 20
#define RETRY_TIMES 20
/* Expected minimum supported interface */
#define BT_MCU_MINIMUM_INTERFACE_NUM 4
/* Bus event */
#define HIF_EVENT_PROBE 0
#define HIF_EVENT_DISCONNECT 1
#define HIF_EVENT_SUSPEND 2
#define HIF_EVENT_RESUME 3
#define HIF_EVENT_STANDBY 4
#define HIF_EVENT_SUBSYS_RESET 5
#define HIF_EVENT_WHOLE_CHIP_RESET 6
#define HIF_EVENT_FW_DUMP 7
#define CHAR2HEX_SIZE 4
/**
* For chip reset pin
*/
#define RESET_PIN_SET_LOW_TIME 100
/* stpbtfwlog setting */
#define FWLOG_QUEUE_COUNT (400 * BT_MCU_MINIMUM_INTERFACE_NUM)
#define FWLOG_ASSERT_QUEUE_COUNT 45000
#define FWLOG_BLUETOOTH_KPI_QUEUE_COUNT 400
#define HCI_MAX_COMMAND_SIZE 255
#define HCI_MAX_COMMAND_BUF_SIZE (HCI_MAX_COMMAND_SIZE * 3)
#ifndef HCI_MAX_ISO_SIZE
#define HCI_MAX_ISO_SIZE 340
#endif
/* fwlog information define */
#define FWLOG_TYPE 0xF0
#define FWLOG_LEN_SIZE 2
#define FWLOG_TL_SIZE (HCI_TYPE_SIZE + FWLOG_LEN_SIZE)
#define FWLOG_ATTR_TYPE_LEN 1
#define FWLOG_ATTR_LEN_LEN 1
#define FWLOG_ATTR_RX_LEN_LEN 2
#define FWLOG_ATTR_TL_SIZE (FWLOG_ATTR_TYPE_LEN + FWLOG_ATTR_LEN_LEN)
#define FWLOG_HCI_IDX 0x00
#define FWLOG_DONGLE_IDX 0x01
#define FWLOG_TX 0x10
#define FWLOG_RX 0x11
/* total fwlog info len */
#define FWLOG_PRSV_LEN 32
#define COUNTRY_CODE_LEN 2
#define EDR_MIN -32
#define EDR_MAX 20
#define EDR_MIN_LV9 13
#define BLE_MIN -29
#define BLE_MAX 20
#define EDR_MIN_R1 -64
#define EDR_MAX_R1 40
#define EDR_MIN_LV9_R1 26
#define BLE_MIN_R1 -58
#define BLE_MAX_R1 40
#define EDR_MIN_R2 -128
#define EDR_MAX_R2 80
#define EDR_MIN_LV9_R2 52
#define BLE_MIN_R2 -116
#define BLE_MAX_R2 80
#define ERR_PWR -9999
#define WAIT_POWERKEY_TIMEOUT 5000
#define SEPARATOR_LEN 2
#define STP_CRC_LEN 2
#define TEMP_LEN 260
#define SEARCH_LEN 32
#define TEXT_LEN 128
#define DUAL_BT_FLAG (0x1 << 5)
/* CMD&Event sent by driver */
#define READ_EFUSE_CMD_LEN 18
#define READ_EFUSE_EVT_HDR_LEN 9
#define READ_EFUSE_CMD_BLOCK_OFFSET 10
#define CHECK_LD_PATCH_CMD_LEN 9
#define CHECK_LD_PATCH_EVT_HDR_LEN 7
#define CHECK_LD_PATCH_EVT_RESULT_OFFSET 6 /* need confirm later */
#define HWERR_EVT_LEN 4
#define LD_PATCH_EVT_LEN 8
#define HCI_RESET_CMD_LEN 4
#define HCI_RESET_EVT_LEN 7
#define WMT_RESET_CMD_LEN 9
#define WMT_RESET_EVT_LEN 8
#define WMT_POWER_ON_CMD_LEN 10
#define WMT_POWER_ON_EVT_HDR_LEN 7
#define WMT_POWER_ON_EVT_RESULT_OFFSET 7
#define WMT_POWER_OFF_CMD_LEN 10
#define WMT_POWER_OFF_EVT_HDR_LEN 7
#define WMT_POWER_OFF_EVT_RESULT_OFFSET 7
#define PICUS_ENABLE_CMD_LEN 8
#define PICUS_ENABLE_EVT_HDR_LEN 9
#define PICUS_DISABLE_CMD_LEN 8
#define PICUS_DISABLE_EVT_HDR_LEN 9
#define RES_APCF_CMD_LEN 9
#define RES_APCF_EVT_LEN 5
#define READ_ADDRESS_CMD_LEN 4
#define READ_ADDRESS_EVT_HDR_LEN 7
#define WOBLE_ENABLE_DEFAULT_CMD_LEN 40
#define WOBLE_ENABLE_DEFAULT_EVT_LEN 5
#define WOBLE_DISABLE_DEFAULT_CMD_LEN 9
#define WOBLE_DISABLE_DEFAULT_EVT_LEN 5
#define RADIO_OFF_CMD_LEN 9
#define RADIO_OFF_EVT_LEN 5
#define RADIO_ON_CMD_LEN 9
#define RADIO_ON_EVT_LEN 5
#define APCF_FILTER_CMD_LEN 14
#define APCF_FILTER_EVT_HDR_LEN 8
#define APCF_CMD_LEN 43
#define APCF_EVT_HDR_LEN 7
#define APCF_DELETE_CMD_LEN 7
#define APCF_DELETE_EVT_HDR_LEN 8
#define APCF_RESUME_EVT_HDR_LEN 7
#define CHECK_WOBX_DEBUG_CMD_LEN 8
#define CHECK_WOBX_DEBUG_EVT_HDR_LEN 2
#define SET_STP_CMD_LEN 13
#define SET_STP_EVT_LEN 9
#define SET_STP1_CMD_LEN 16
#define SET_STP1_EVT_LEN 19
#define SET_SLEEP_CMD_LEN 11
#define SET_SLEEP_EVT_LEN 7
#define EVT_HDR_LEN 2
#define ASSERT_CMD_LEN 9
#define TXPOWER_CMD_LEN 16
#define TXPOWER_EVT_LEN 7
#define FW_COREDUMP_CMD_LEN 4
#define HCI_RESET_CMD_LEN 4
#define READ_ISO_PACKET_SIZE_CMD_HDR_LEN 4
#define AUDIO_SETTING_CMD_LEN 8
#define AUDIO_SETTING_EVT_LEN 7
#define READ_PINMUX_CMD_LEN 8
#define READ_PINMUX_EVT_CMP_LEN 6
#define READ_PINMUX_EVT_REAL_LEN 11
#define WRITE_PINMUX_CMD_LEN 12
#define WRITE_PINMUX_EVT_LEN 7
#define PINMUX_REG_NUM 2
#define WRITE_PINMUX_CMD_LEN_7902 7
#define WRITE_PINMUX_EVT_LEN_7902 7
#define PINMUX_REG_NUM_7902 4
#define FW_VERSION_BUF_SIZE 256
#define FW_VERSION_KEY_WORDS "t-neptune"
#if BUILD_QA_DBG
#define CFG_SHOW_FULL_MACADDR 1
#else
#define CFG_SHOW_FULL_MACADDR 0
#endif
#if CFG_SHOW_FULL_MACADDR
#define MACSTR "%02X:%02X:%02X:%02X:%02X:%02X"
#define MAC2STR(a) ((unsigned char *)a)[0], ((unsigned char *)a)[1], ((unsigned char *)a)[2],\
((unsigned char *)a)[3], ((unsigned char *)a)[4], ((unsigned char *)a)[5]
#else
#define MACSTR "%02X:%02X:**:**:**:%02X"
#define MAC2STR(a) ((unsigned char *)a)[0], ((unsigned char *)a)[1], ((unsigned char *)a)[5]
#endif
enum {
RES_1 = 0,
RES_DOT_5,
RES_DOT_25
};
enum {
CHECK_SINGLE_SKU_PWR_MODE = 0,
CHECK_SINGLE_SKU_EDR_MAX,
CHECK_SINGLE_SKU_BLE,
CHECK_SINGLE_SKU_BLE_2M,
CHECK_SINGLE_SKU_BLE_LR_S2,
CHECK_SINGLE_SKU_BLE_LR_S8,
CHECK_SINGLE_SKU_ALL
};
enum {
DISABLE_LV9 = 0,
ENABLE_LV9
};
enum {
DIFF_MODE_3DB = 0,
DIFF_MODE_0DB
};
struct btmtk_cif_state {
unsigned char ops_enter;
unsigned char ops_end;
unsigned char ops_error;
};
enum TX_TYPE {
BTMTK_TX_CMD_FROM_DRV = 0, /* send hci cmd and wmt cmd by driver */
BTMTK_TX_ACL_FROM_DRV, /* send acl pkt with load rompatch by driver */
BTMTK_TX_PKT_FROM_HOST, /* send pkt from host, include acl and hci */
};
enum bt_state {
FUNC_OFF = 0,
TURNING_ON = 1,
PRE_ON_AFTER_CAL = 2,
FUNC_ON = 3,
RESET_START = 4,
RESET_END = 5
};
struct bt_power_setting {
int8_t EDR_Max;
int8_t LV9;
int8_t DM;
int8_t IR;
int8_t BLE_1M;
int8_t BLE_2M;
int8_t BLE_LR_S2;
int8_t BLE_LR_S8;
char country_code[COUNTRY_CODE_LEN + 1];
};
enum {
BTMTK_DONGLE_STATE_UNKNOWN,
BTMTK_DONGLE_STATE_POWER_ON,
BTMTK_DONGLE_STATE_POWER_OFF,
BTMTK_DONGLE_STATE_ERROR,
};
enum {
HW_ERR_NONE = 0x00,
HW_ERR_CODE_CHIP_RESET = 0xF0,
HW_ERR_CODE_USB_DISC = 0xF1,
HW_ERR_CODE_CORE_DUMP = 0xF2,
HW_ERR_CODE_POWER_ON = 0xF3,
HW_ERR_CODE_POWER_OFF = 0xF4,
HW_ERR_CODE_SET_SLEEP_CMD = 0xF5,
HW_ERR_CODE_RESET_STACK_AFTER_WOBLE = 0xF6,
};
/* Please keep sync with btmtk_set_state function */
enum {
/* BTMTK_STATE_UNKNOWN = 0, */
BTMTK_STATE_INIT = 1,
BTMTK_STATE_DISCONNECT,
BTMTK_STATE_PROBE,
BTMTK_STATE_WORKING,
BTMTK_STATE_SUSPEND,
BTMTK_STATE_RESUME,
BTMTK_STATE_FW_DUMP,
BTMTK_STATE_STANDBY,
BTMTK_STATE_SUBSYS_RESET,
BTMTK_STATE_SEND_ASSERT,
BTMTK_STATE_MSG_NUM
};
/* Please keep sync with btmtk_fops_set_state function */
enum {
/* BTMTK_FOPS_STATE_UNKNOWN = 0, */
BTMTK_FOPS_STATE_INIT = 1,
BTMTK_FOPS_STATE_OPENING, /* during opening */
BTMTK_FOPS_STATE_OPENED, /* open in fops_open */
BTMTK_FOPS_STATE_CLOSING, /* during closing */
BTMTK_FOPS_STATE_CLOSED, /* closed */
BTMTK_FOPS_STATE_MSG_NUM
};
enum {
BTMTK_EVENT_COMPARE_STATE_UNKNOWN,
BTMTK_EVENT_COMPARE_STATE_NOTHING_NEED_COMPARE,
BTMTK_EVENT_COMPARE_STATE_NEED_COMPARE,
BTMTK_EVENT_COMPARE_STATE_COMPARE_SUCCESS,
};
enum {
HCI_SNOOP_TYPE_CMD_STACK = 0,
HCI_SNOOP_TYPE_CMD_HIF,
HCI_SNOOP_TYPE_EVT_STACK,
HCI_SNOOP_TYPE_EVT_HIF,
HCI_SNOOP_TYPE_ADV_EVT_STACK,
HCI_SNOOP_TYPE_ADV_EVT_HIF,
HCI_SNOOP_TYPE_NOCP_EVT_STACK,
HCI_SNOOP_TYPE_NOCP_EVT_HIF,
HCI_SNOOP_TYPE_TX_ACL_STACK,
HCI_SNOOP_TYPE_TX_ACL_HIF,
HCI_SNOOP_TYPE_RX_ACL_STACK,
HCI_SNOOP_TYPE_RX_ACL_HIF,
HCI_SNOOP_TYPE_TX_ISO_STACK,
HCI_SNOOP_TYPE_TX_ISO_HIF,
HCI_SNOOP_TYPE_RX_ISO_STACK,
HCI_SNOOP_TYPE_RX_ISO_HIF,
HCI_SNOOP_TYPE_MAX
};
enum {
DEBUG_SOP_SLEEP,
DEBUG_SOP_WAKEUP,
DEBUG_SOP_NO_RESPONSE,
DEBUG_SOP_NONE
};
struct dump_debug_cr {
u32 addr_w;
u32 value_w;
u32 addr_r;
};
struct h4_recv_pkt {
u8 type; /* Packet type */
u8 hlen; /* Header length */
u8 loff; /* Data length offset in header */
u8 lsize; /* Data length field size */
u16 maxlen; /* Max overall packet length */
int (*recv)(struct hci_dev *hdev, struct sk_buff *skb);
};
#pragma pack(1)
struct _PATCH_HEADER {
u8 ucDateTime[16];
u8 ucPlatform[4];
u16 u2HwVer;
u16 u2SwVer;
u32 u4MagicNum;
};
struct _Global_Descr {
u32 u4PatchVer;
u32 u4SubSys;
u32 u4FeatureOpt;
u32 u4SectionNum;
};
struct _Section_Map {
u32 u4SecType;
u32 u4SecOffset;
u32 u4SecSize;
union {
u32 u4SecSpec[SECTION_SPEC_NUM];
struct {
u32 u4DLAddr;
u32 u4DLSize;
u32 u4SecKeyIdx;
u32 u4AlignLen;
u32 u4SecType;
u32 u4DLModeCrcType;
u32 u4Crc;
u32 reserved[6];
} bin_info_spec;
};
};
#pragma pack()
#define H4_RECV_ACL \
.type = HCI_ACLDATA_PKT, \
.hlen = HCI_ACL_HDR_SIZE, \
.loff = 2, \
.lsize = 2, \
.maxlen = HCI_MAX_FRAME_SIZE
#define H4_RECV_SCO \
.type = HCI_SCODATA_PKT, \
.hlen = HCI_SCO_HDR_SIZE, \
.loff = 2, \
.lsize = 1, \
.maxlen = HCI_MAX_SCO_SIZE
#define H4_RECV_EVENT \
.type = HCI_EVENT_PKT, \
.hlen = HCI_EVENT_HDR_SIZE, \
.loff = 1, \
.lsize = 1, \
.maxlen = HCI_MAX_EVENT_SIZE
/* TODO: replace with kernel constants once the kernel supports the new ISO spec */
#define HCI_ISODATA_PKT 0x05
#define HCI_ISO_HDR_SIZE 4
#define H4_RECV_ISO \
.type = HCI_ISODATA_PKT, \
.hlen = HCI_ISO_HDR_SIZE, \
.loff = 2, \
.lsize = 2, \
.maxlen = HCI_MAX_FRAME_SIZE
struct btmtk_dev {
struct hci_dev *hdev;
unsigned long hdev_flags;
unsigned long flags;
void *intf_dev;
void *cif_dev;
struct work_struct work;
struct work_struct waker;
struct work_struct reset_waker;
struct timer_list chip_reset_timer;
int recv_evt_len;
int tx_in_flight;
spinlock_t txlock;
spinlock_t rxlock;
struct sk_buff *evt_skb;
struct sk_buff *sco_skb;
/* For ble iso packet size */
int iso_threshold;
unsigned int sco_num;
int isoc_altsetting;
int suspend_count;
/* For tx queue */
unsigned long tx_state;
/* For rx queue */
struct workqueue_struct *workqueue;
struct sk_buff_head rx_q;
struct work_struct rx_work;
struct sk_buff *rx_skb;
wait_queue_head_t p_wait_event_q;
unsigned int subsys_reset;
unsigned int chip_reset;
unsigned char *rom_patch_bin_file_name;
unsigned int chip_id;
unsigned int flavor;
unsigned int dualBT;
unsigned int fw_version;
unsigned char dongle_index;
unsigned char power_state;
unsigned char fops_state;
unsigned char interface_state;
struct btmtk_cif_state *cif_state;
/* io buffer for usb control transfer */
unsigned char *io_buf;
unsigned char *setting_file;
unsigned char bdaddr[BD_ADDRESS_SIZE];
unsigned char *bt_cfg_file_name;
struct bt_cfg_struct bt_cfg;
/* single sku */
unsigned char *country_file_name;
int get_hci_reset;
/* debug sop */
struct debug_reg_struct debug_sop_reg_dump;
unsigned char debug_sop_file_name[MAX_BIN_FILE_NAME_LEN];
};
typedef int (*cif_bt_init_ptr)(void);
typedef void (*cif_bt_exit_ptr)(void);
typedef int (*cif_open_ptr)(struct hci_dev *hdev);
typedef int (*cif_close_ptr)(struct hci_dev *hdev);
typedef int (*cif_reg_read_ptr)(struct btmtk_dev *bdev, u32 reg, u32 *val);
typedef int (*cif_reg_write_ptr)(struct btmtk_dev *bdev, u32 reg, u32 val);
typedef int (*cif_send_cmd_ptr)(struct btmtk_dev *bdev, struct sk_buff *skb,
int delay, int retry, int pkt_type);
typedef int (*cif_send_and_recv_ptr)(struct btmtk_dev *bdev,
struct sk_buff *skb,
const uint8_t *event, const int event_len,
int delay, int retry, int pkt_type);
typedef int (*cif_event_filter_ptr)(struct btmtk_dev *bdev, struct sk_buff *skb);
typedef int (*cif_subsys_reset_ptr)(struct btmtk_dev *bdev);
typedef int (*cif_whole_reset_ptr)(struct btmtk_dev *bdev);
typedef void (*cif_chip_reset_notify_ptr)(struct btmtk_dev *bdev);
typedef void (*cif_mutex_lock_ptr)(struct btmtk_dev *bdev);
typedef void (*cif_mutex_unlock_ptr)(struct btmtk_dev *bdev);
typedef int (*cif_flush_ptr)(struct btmtk_dev *bdev);
typedef void (*cif_log_init_ptr)(void);
typedef void (*cif_log_register_cb_ptr)(void (*func)(void));
typedef ssize_t (*cif_log_read_to_user_ptr)(char __user *buf, size_t count);
typedef unsigned int (*cif_log_get_buf_size_ptr)(void);
typedef void (*cif_log_deinit_ptr)(void);
typedef void (*cif_open_done_ptr)(struct btmtk_dev *bdev);
typedef int (*cif_dl_dma_ptr)(struct btmtk_dev *bdev, u8 *image,
u8 *fwbuf, int section_dl_size, int section_offset);
typedef void (*cif_dump_debug_sop_ptr)(struct btmtk_dev *bdev);
typedef void (*cif_waker_notify_ptr)(struct btmtk_dev *bdev);
typedef int (*cif_enter_standby_ptr)(void);
struct hif_hook_ptr {
cif_bt_init_ptr init;
cif_bt_exit_ptr exit;
cif_open_ptr open;
cif_close_ptr close;
cif_reg_read_ptr reg_read;
cif_reg_write_ptr reg_write;
cif_send_cmd_ptr send_cmd;
cif_send_and_recv_ptr send_and_recv;
cif_event_filter_ptr event_filter;
cif_subsys_reset_ptr subsys_reset;
cif_whole_reset_ptr whole_reset;
cif_chip_reset_notify_ptr chip_reset_notify;
cif_mutex_lock_ptr cif_mutex_lock;
cif_mutex_unlock_ptr cif_mutex_unlock;
cif_flush_ptr flush;
cif_log_init_ptr log_init;
cif_log_register_cb_ptr log_register_cb;
cif_log_read_to_user_ptr log_read_to_user;
cif_log_get_buf_size_ptr log_get_buf_size;
cif_log_deinit_ptr log_deinit;
cif_open_done_ptr open_done;
cif_dl_dma_ptr dl_dma;
cif_dump_debug_sop_ptr dump_debug_sop;
cif_waker_notify_ptr waker_notify;
cif_enter_standby_ptr enter_standby;
};
struct hci_snoop {
u8 buf[HCI_SNOOP_ENTRY_NUM][HCI_SNOOP_MAX_BUF_SIZE];
u8 len[HCI_SNOOP_ENTRY_NUM];
u16 actual_len[HCI_SNOOP_ENTRY_NUM];
char timestamp[HCI_SNOOP_ENTRY_NUM][HCI_SNOOP_TS_STR_LEN];
u8 index;
};
struct btmtk_main_info {
int chip_reset_flag;
atomic_t subsys_reset;
atomic_t chip_reset;
atomic_t subsys_reset_count;
atomic_t whole_reset_count;
atomic_t subsys_reset_conti_count;
u8 reset_stack_flag;
struct wakeup_source *fwdump_ws;
struct wakeup_source *woble_ws;
struct wakeup_source *eint_ws;
#if WAKEUP_BT_IRQ
struct wakeup_source *irq_ws;
#endif
struct hif_hook_ptr hif_hook;
struct bt_power_setting PWS;
/* Save HCI snoop buffers for debug */
struct hci_snoop snoop[HCI_SNOOP_TYPE_MAX];
u8 wmt_over_hci_header[WMT_OVER_HCI_HEADER_SIZE];
u8 read_iso_packet_size_cmd[READ_ISO_PACKET_CMD_SIZE];
/* record firmware version */
struct proc_dir_entry *proc_dir;
char fw_version_str[FW_VERSION_BUF_SIZE];
atomic_t fwlog_ref_cnt;
};
static inline int is_mt6639(u32 chip_id)
{
chip_id &= 0xFFFF;
if (chip_id == 0x6639)
return 1;
return 0;
}
static inline int is_mt7902(u32 chip_id)
{
chip_id &= 0xFFFF;
if (chip_id == 0x7902)
return 1;
return 0;
}
static inline int is_mt7922(u32 chip_id)
{
chip_id &= 0xFFFF;
if (chip_id == 0x7922)
return 1;
return 0;
}
static inline int is_mt7961(u32 chip_id)
{
chip_id &= 0xFFFF;
if (chip_id == 0x7961)
return 1;
return 0;
}
static inline int is_mt66xx(u32 chip_id)
{
chip_id &= 0xFFFF;
if (chip_id == 0x6631 || chip_id == 0x6635)
return 1;
return 0;
}
/* Get BT whole packet length except hci type */
static inline unsigned int get_pkt_len(unsigned char type, unsigned char *buf)
{
unsigned int len = 0;
switch (type) {
/* HCI packet header formats (AA = length byte(s), xx/yy = other header bytes):
 * cmd : 01 xx yy AA + payload
 * acl : 02 xx yy AA AA + payload
 * sco : 03 xx yy AA + payload
 * evt : 04 xx AA + payload
 * iso : 05 xx yy AA AA + payload
 */
case HCI_COMMAND_PKT:
len = buf[2] + 3;
break;
case HCI_ACLDATA_PKT:
len = buf[2] + ((buf[3] << 8) & 0xff00) + 4;
break;
case HCI_SCODATA_PKT:
len = buf[2] + 3;
break;
case HCI_EVENT_PKT:
len = buf[1] + 2;
break;
case HCI_ISO_PKT:
len = buf[2] + (((buf[3] & 0x3F) << 8) & 0xff00) + HCI_ISO_PKT_HEADER_SIZE;
break;
default:
len = 0;
}
return len;
}
unsigned char btmtk_get_chip_state(struct btmtk_dev *bdev);
void btmtk_set_chip_state(struct btmtk_dev *bdev, unsigned char new_state);
int btmtk_allocate_hci_device(struct btmtk_dev *bdev, int hci_bus_type);
void btmtk_free_hci_device(struct btmtk_dev *bdev, int hci_bus_type);
int btmtk_register_hci_device(struct btmtk_dev *bdev);
int btmtk_deregister_hci_device(struct btmtk_dev *bdev);
int btmtk_recv(struct hci_dev *hdev, const u8 *data, size_t count);
int btmtk_recv_event(struct hci_dev *hdev, struct sk_buff *skb);
int btmtk_recv_acl(struct hci_dev *hdev, struct sk_buff *skb);
int btmtk_recv_iso(struct hci_dev *hdev, struct sk_buff *skb);
int btmtk_send_init_cmds(struct btmtk_dev *hdev);
int btmtk_send_deinit_cmds(struct btmtk_dev *hdev);
int btmtk_load_rom_patch(struct btmtk_dev *bdev);
struct btmtk_dev *btmtk_get_dev(void);
int btmtk_cap_init(struct btmtk_dev *bdev);
struct btmtk_main_info *btmtk_get_main_info(void);
int btmtk_get_interface_num(void);
int btmtk_reset_power_on(struct btmtk_dev *bdev);
void btmtk_send_hw_err_to_host(struct btmtk_dev *bdev);
void btmtk_free_setting_file(struct btmtk_dev *bdev);
unsigned char btmtk_fops_get_state(struct btmtk_dev *bdev);
void btmtk_hci_snoop_save(unsigned int type, u8 *buf, u32 len);
void btmtk_hci_snoop_print(const u8 *buf, u32 len);
void btmtk_hci_snoop_print_to_log(void);
void *btmtk_kallsyms_lookup_name(const char *name);
void btmtk_get_UTC_time_str(char *ts_str);
void btmtk_reg_hif_hook(struct hif_hook_ptr *hook);
int btmtk_main_cif_initialize(struct btmtk_dev *bdev, int hci_bus);
void btmtk_main_cif_uninitialize(struct btmtk_dev *bdev, int hci_bus);
int btmtk_main_cif_disconnect_notify(struct btmtk_dev *bdev, int hci_bus);
int btmtk_load_code_from_bin(u8 **image, char *bin_name,
struct device *dev, u32 *code_len, u8 retry);
int btmtk_main_send_cmd(struct btmtk_dev *bdev, const uint8_t *cmd,
const int cmd_len, const uint8_t *event, const int event_len, int delay,
int retry, int pkt_type);
int btmtk_load_code_from_setting_files(char *setting_file_name,
struct device *dev, u32 *code_len, struct btmtk_dev *bdev);
int btmtk_load_fw_cfg_setting(char *block_name, struct fw_cfg_struct *save_content,
int counter, u8 *searchcontent, enum fw_cfg_index_len index_length);
int btmtk_send_assert_cmd(struct btmtk_dev *bdev);
void btmtk_free_fw_cfg_struct(struct fw_cfg_struct *fw_cfg, int count);
struct btmtk_dev **btmtk_get_pp_bdev(void);
void btmtk_load_debug_sop_register(char *debug_sop_name, struct device *dev, struct btmtk_dev *bdev);
void btmtk_clean_debug_reg_file(struct btmtk_dev *bdev);
int32_t btmtk_set_sleep(struct hci_dev *hdev, u_int8_t need_wait);
int32_t bgfsys_bt_patch_dl(void);
int btmtk_efuse_read(struct btmtk_dev *bdev, u16 addr, u8 *value);
void btmtk_set_country_code_from_wifi(char *code);
#endif /* __BTMTK_MAIN_H__ */


@@ -0,0 +1,61 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2018 MediaTek Inc.
*/
#ifndef __BTMTK_WOBLE_H__
#define __BTMTK_WOBLE_H__
#include "btmtk_define.h"
#include "btmtk_main.h"
/* Define for WoBLE */
#define WOBLE_SETTING_COUNT 10
#define WOBLE_EVENT_INTERVAL_TIMO 500
#define WOBLE_COMP_EVENT_TIMO 5000
/* WOBX attribute type */
#define WOBX_TRIGGER_INFO_ADDR_TYPE 1
#define WOBX_TRIGGER_INFO_ADV_DATA_TYPE 2
#define WOBX_TRIGGER_INFO_TRACE_LOG_TYPE 3
#define WOBX_TRIGGER_INFO_SCAN_LOG_TYPE 4
#define WOBX_TRIGGER_INFO_TRIGGER_CNT_TYPE 5
struct btmtk_woble {
unsigned char *woble_setting_file_name;
unsigned int woble_setting_len;
struct fw_cfg_struct woble_setting_apcf[WOBLE_SETTING_COUNT];
struct fw_cfg_struct woble_setting_apcf_fill_mac[WOBLE_SETTING_COUNT];
struct fw_cfg_struct woble_setting_apcf_fill_mac_location[WOBLE_SETTING_COUNT];
struct fw_cfg_struct woble_setting_radio_off;
struct fw_cfg_struct woble_setting_wakeup_type;
struct fw_cfg_struct woble_setting_radio_off_status_event;
/* complete event */
struct fw_cfg_struct woble_setting_radio_off_comp_event;
struct fw_cfg_struct woble_setting_radio_on;
struct fw_cfg_struct woble_setting_radio_on_status_event;
struct fw_cfg_struct woble_setting_radio_on_comp_event;
/* set apcf after resume(radio on) */
struct fw_cfg_struct woble_setting_apcf_resume[WOBLE_SETTING_COUNT];
/* For WoBLE EINT */
unsigned int wobt_irq;
int wobt_irqlevel;
atomic_t irq_enable_count;
struct input_dev *WoBLEInputDev;
void *bdev;
};
int btmtk_woble_suspend(struct btmtk_woble *bt_woble);
int btmtk_woble_resume(struct btmtk_woble *bt_woble);
int btmtk_woble_initialize(struct btmtk_dev *bdev, struct btmtk_woble *bt_woble);
void btmtk_woble_uninitialize(struct btmtk_woble *bt_woble);
void btmtk_woble_wake_unlock(struct btmtk_dev *bdev);
#if WAKEUP_BT_IRQ
void btmtk_sdio_irq_wake_lock_timeout(struct btmtk_dev *bdev);
#endif
int btmtk_send_apcf_reserved(struct btmtk_dev *bdev);
#endif /* __BTMTK_WOBLE_H__ */


@@ -0,0 +1,182 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016,2017 MediaTek Inc.
*/
#ifndef _BTMTK_SDIO_H_
#define _BTMTK_SDIO_H_
/* It's for reset procedure */
#include <linux/mmc/sdio_ids.h>
#include <linux/module.h>
#include <linux/of_gpio.h>
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include <linux/mmc/sdio.h>
#include <linux/mmc/sdio_func.h>
#include "btmtk_define.h"
#include "btmtk_main.h"
#include "btmtk_woble.h"
#include "btmtk_buffer_mode.h"
#include "btmtk_chip_reset.h"
#ifndef BTMTK_SDIO_DEBUG
#define BTMTK_SDIO_DEBUG 0
#endif
/**
 * Card-related definitions.
 */
#ifndef SDIO_VENDOR_ID_MEDIATEK
#define SDIO_VENDOR_ID_MEDIATEK 0x037A
#endif
#define HCI_HEADER_LEN 4
#define MTK_STP_TLR_SIZE 2
#define STP_HEADER_LEN 4
#define STP_HEADER_CRC_LEN 2
#define HCI_MAX_COMMAND_SIZE 255
#define URB_MAX_BUFFER_SIZE (4*1024)
#define BTMTK_SDIO_FUNC 2
/* common register address */
#define CCIR 0x0000
#define CHLPCR 0x0004
#define CSDIOCSR 0x0008
#define CHCR 0x000C
#define CHISR 0x0010
#define CHIER 0x0014
#define CTDR 0x0018
#define CRDR 0x001C
#define CTFSR 0x0020
#define CRPLR 0x0024
#define CSICR 0x00C0
#define PD2HRM0R 0x00DC
#define SWPCDBGR 0x0154
#define PH2DSM0R 0x00C4
/* PH2DSM0R*/
#define PH2DSM0R_DRIVER_OWN 0x00000001
/* CHLPCR */
#define C_FW_INT_EN_SET 0x00000001
#define C_FW_INT_EN_CLEAR 0x00000002
/* CHISR */
#define RX_PKT_LEN 0xFFFF0000
#define FIRMWARE_INT 0x0000FE00
/* PD2HRM0R */
#define PD2HRM0R_DRIVER_OWN 0x00000001
#define PD2HRM0R_FW_OWN 0x00000000
/* MCU notifies the host driver of an L0.5 reset */
#define FIRMWARE_INT_BIT31 0x80000000
/* MCU notify host driver for coredump */
#define FIRMWARE_INT_BIT15 0x00008000
#define TX_FIFO_OVERFLOW 0x00000100
#define FW_INT_IND_INDICATOR 0x00000080
#define TX_COMPLETE_COUNT 0x00000070
#define TX_UNDER_THOLD 0x00000008
#define TX_EMPTY 0x00000004
#define RX_DONE 0x00000002
#define FW_OWN_BACK_INT 0x00000001
/* MCU address offset */
#define MCU_ADDRESS_OFFSET_CMD 12
#define MCU_ADDRESS_OFFSET_EVT 16
/* wifi CR */
#define CONDBGCR 0x0034
#define CONDBGCR_SEL 0x0040
#define SDIO_CTRL_EN (1 << 31)
#define WM_MONITER_SEL (~(0x40000000))
#define PC_MONITER_SEL (~(0x20000000))
#define PC_IDX_SWH(val, idx) ((val & (~(0x3F << 16))) | ((0x3F & idx) << 16))
typedef int (*pdwnc_func) (u8 fgReset);
typedef int (*reset_func_ptr2) (unsigned int gpio, int init_value);
typedef int (*set_gpio_low)(u8 gpio);
typedef int (*set_gpio_high)(u8 gpio);
/**
* Send cmd dispatch evt
*/
#define HCI_EV_VENDOR 0xff
#define SDIO_BLOCK_SIZE 512
#define SDIO_RW_RETRY_COUNT 500
#define MTK_SDIO_PACKET_HEADER_SIZE 4
/* Driver & FW own related */
#define DRIVER_OWN 0
#define FW_OWN 1
#define SET_OWN_LOOP_COUNT 10
/* CMD&Event sent by driver */
#define READ_REGISTER_CMD_LEN 16
#define READ_REGISTER_EVT_HDR_LEN 11
#define FW_ASSERT_CMD_LEN 4
#define FW_ASSERT_CMD1_LEN 9
#define NOTIFY_ALT_EVT_LEN 7
#define READ_ADDRESS_EVT_HDR_LEN 7
#define READ_ADDRESS_EVT_PAYLOAD_OFFSET 7
#define WOBLE_DEBUG_EVT_TYPE 0xE8
#define BLE_EVT_TYPE 0x3E
#define LD_PATCH_CMD_LEN 10
#define LD_PATCH_EVT_LEN 8
#define BTMTK_SDIO_THREAD_STOP (1 << 0)
#define BTMTK_SDIO_THREAD_TX (1 << 1)
#define BTMTK_SDIO_THREAD_RX (1 << 2)
#define BTMTK_SDIO_THREAD_FW_OWN (1 << 3)
#define FW_OWN_TIMEOUT 30
#define FW_OWN_TIMER_INIT 0
#define FW_OWN_TIMER_RUNNING 1
#define CHECK_THREAD_RETRY_TIMES 50
struct btmtk_sdio_hdr {
/* For SDIO Header */
__le16 len;
__le16 reserved;
/* For hci type */
u8 bt_type;
} __packed;
struct btmtk_sdio_thread {
struct task_struct *task;
wait_queue_head_t wait_q;
void *priv;
atomic_t thread_status;
};
struct btmtk_sdio_dev {
struct sdio_func *func;
struct btmtk_dev *bdev;
bool patched;
bool no_fw_own;
atomic_t int_count;
atomic_t tx_rdy;
/* TODO, need to confirm the max size of urb data, also need to confirm
* whether intr_complete and bulk_complete and soc_complete can all share
* this urb_transfer_buf
*/
unsigned char *transfer_buf;
unsigned char *sdio_packet;
struct sk_buff_head tx_queue;
struct btmtk_sdio_thread sdio_thread;
struct btmtk_woble bt_woble;
struct btmtk_buffer_mode_struct *buffer_mode;
struct timer_list fw_own_timer;
atomic_t fw_own_timer_flag;
};
int btmtk_sdio_read_bt_mcu_pc(u32 *val);
int btmtk_sdio_read_conn_infra_pc(u32 *val);
#endif


@@ -0,0 +1,59 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016,2017 MediaTek Inc.
*/
#ifndef _BTMTK_UART_H_
#define _BTMTK_UART_H_
#include <linux/serdev.h>
#include "btmtk_define.h"
#include "btmtk_main.h"
#include "btmtk_buffer_mode.h"
#ifndef UART_DEBUG
#define UART_DEBUG 0
#endif
/**
 * Card-related definitions.
 */
#define HCI_HEADER_LEN 4
#define MTK_STP_TLR_SIZE 2
#define STP_HEADER_LEN 4
#define STP_HEADER_CRC_LEN 2
#define HCI_MAX_COMMAND_SIZE 255
#define MAX_BUFFER_SIZE (4*1024)
/* CMD&Event sent by driver */
#define READ_REGISTER_CMD_LEN 16
#define READ_REGISTER_EVT_HDR_LEN 11
/* MCU address offset */
#define MCU_ADDRESS_OFFSET_CMD 12
#define MCU_ADDRESS_OFFSET_EVT 16
typedef int (*pdwnc_func) (u8 fgReset);
typedef int (*reset_func_ptr2) (unsigned int gpio, int init_value);
typedef int (*set_gpio_low)(u8 gpio);
typedef int (*set_gpio_high)(u8 gpio);
/**
* Send cmd dispatch evt
*/
#define HCI_EV_VENDOR 0xff
#define READ_ADDRESS_EVT_HDR_LEN 7
#define READ_ADDRESS_EVT_PAYLOAD_OFFSET 7
#define WOBLE_DEBUG_EVT_TYPE 0xE8
#define LD_PATCH_CMD_LEN 10
#define LD_PATCH_EVT_LEN 8
struct btmtk_uart_dev {
struct serdev_device *serdev;
struct clk *clk;
struct clk *osc;
unsigned char *transfer_buf;
};
#endif


@@ -0,0 +1,183 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016,2017 MediaTek Inc.
*/
#ifndef _BTMTK_UART_H_
#define _BTMTK_UART_H_
#include "btmtk_define.h"
#include "btmtk_main.h"
#include "btmtk_buffer_mode.h"
#include "btmtk_woble.h"
#include "btmtk_chip_reset.h"
#include <linux/tty.h>
#include <linux/tty_driver.h>
#include <linux/serial.h>
#include <linux/of_device.h>
#include <linux/regulator/consumer.h>
#include <linux/gpio/consumer.h>
#include <linux/pinctrl/consumer.h>
#include <linux/clk.h>
#include <linux/suspend.h>
#define HCI_HEADER_LEN 4
struct mtk_stp_hdr {
u8 prefix;
__be16 dlen;
u8 cs;
} __packed;
#define MTK_STP_TLR_SIZE 2
#define STP_HEADER_LEN 4
#define STP_HEADER_CRC_LEN 2
#define BTMTKUART_FLAG_STANDALONE_HW BIT(0)
/* CMD&Event sent by driver */
#define READ_REGISTER_CMD_LEN 16
#define READ_REGISTER_EVT_HDR_LEN 11
#define WRITE_REGISTER_CMD_LEN 24
#define WRITE_REGISTER_EVT_HDR_LEN 11
/* MCU address offset */
#define MCU_ADDRESS_OFFSET_CMD 12
#define MCU_ADDRESS_OFFSET_EVT 16
/* MCU value offset */
#define MCU_VALUE_OFFSET_CMD 16
/* Pinmux address and value */
#define BT_PINMUX_CTRL_REG 0x70005054
#define BT_SUBSYS_RST_PINMUX 0x00000020
#define BT_CTSRTS_PINMUX 0x00044000
#define BT_PINMUX_CTRL_ENABLE (BT_SUBSYS_RST_PINMUX | BT_CTSRTS_PINMUX)
#define BT_SUBSYS_RST_REG 0x70002610
#define BT_SUBSYS_RST_ENABLE 0x00000080
#define BT_REG_LEN 4
#define BT_REG_VALUE_LEN 4
/* MCU baud define */
#define BT_FLOWCTRL_OFFSET 12
#define BT_NONE_FC 0x00
#define BT_HW_FC 0x40
#define BT_SW_FC 0x80
#define BT_MTK_SW_FC 0xC0
/**
* Send cmd dispatch evt
*/
#define HCI_EV_VENDOR 0xff
#define READ_ADDRESS_EVT_HDR_LEN 7
#define READ_ADDRESS_EVT_PAYLOAD_OFFSET 7
#define WOBLE_DEBUG_EVT_TYPE 0xE8
#define LD_PATCH_CMD_LEN 10
#define LD_PATCH_EVT_LEN 8
#define SETBAUD_CMD_LEN 13
#define SETBAUD_EVT_LEN 9
#define GETBAUD_CMD_LEN 9
#define GETBAUD_EVT_LEN 9
#define BAUD_SIZE 4
#define WAKEUP_CMD_LEN 5
#define WAKEUP_EVT_LEN 9
#define FWOWN_CMD_LEN 9
#define DRVOWN_CMD_LEN 9
#define OWNTYPE_EVT_LEN 9
#define BT_UART_DEFAULT_BAUD 115200
/* Delay time between subsys reset GPIO pull low/high */
#define SUBSYS_RESET_GPIO_DELAY_TIME 50
/* Delay time after write data to io_buf */
#define IO_BUF_DELAY_TIME 50
typedef int (*pdwnc_func) (u8 fgReset);
typedef int (*reset_func_ptr2) (unsigned int gpio, int init_value);
typedef int (*set_gpio_low)(u8 gpio);
typedef int (*set_gpio_high)(u8 gpio);
enum UART_FC {
UART_DISABLE_FC = 0, /*NO flow control*/
/*MTK SW Flow Control, differs from Linux Flow Control*/
UART_MTK_SW_FC = 1,
UART_LINUX_FC = 2, /*Linux SW Flow Control*/
UART_HW_FC = 3, /*HW Flow Control*/
};
struct UART_CONFIG {
enum UART_FC fc;
int parity;
int stop_bit;
int iBaudrate;
};
struct btmtk_uart_data {
unsigned int flags;
const char *fwname;
};
struct btmtk_uart_dev {
struct hci_dev *hdev;
struct tty_struct *tty;
unsigned long hdev_flags;
/* For tx queue */
struct sk_buff_head tx_queue;
spinlock_t tx_lock;
struct task_struct *tx_task;
unsigned long tx_state;
/* For rx queue */
struct sk_buff *rx_skb;
unsigned long rx_state;
struct sk_buff *evt_skb;
wait_queue_head_t p_wait_event_q;
unsigned int subsys_reset;
unsigned int dongle_state;
unsigned int uart_baudrate_set;
u8 stp_pad[6];
u8 stp_cursor;
u16 stp_dlen;
struct UART_CONFIG uart_cfg;
struct btmtk_woble bt_woble;
};
#define btmtk_uart_is_standalone(bdev) \
((bdev)->data->flags & BTMTKUART_FLAG_STANDALONE_HW)
#define btmtk_uart_is_builtin_soc(bdev) \
!((bdev)->data->flags & BTMTKUART_FLAG_STANDALONE_HW)
/**
* Maximum rom patch file name length
*/
#define N_MTK (15+1)
/**
 * Upper layer IOCTLs
 */
#define HCIUARTSETPROTO _IOW('U', 200, int)
#define HCIUARTSETBAUD _IOW('U', 201, int)
#define HCIUARTGETBAUD _IOW('U', 202, int)
#define HCIUARTSETSTP _IOW('U', 203, int)
#define HCIUARTLOADPATCH _IOW('U', 204, int)
#define HCIUARTSETWAKEUP _IOW('U', 205, int)
#define HCIUARTINIT _IOW('U', 206, int)
//int btmtk_cif_send_calibration(struct hci_dev *hdev);
#endif


@@ -0,0 +1,116 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016,2017 MediaTek Inc.
*/
#ifndef _BTMTK_USB_H_
#define _BTMTK_USB_H_
#include <linux/usb.h>
#include "btmtk_define.h"
#include "btmtk_main.h"
#include "btmtk_woble.h"
#include "btmtk_chip_reset.h"
#define HCI_MAX_COMMAND_SIZE 255
#define URB_MAX_BUFFER_SIZE (4*1024)
#define BT0_MCU_INTERFACE_NUM 0
#define BT1_MCU_INTERFACE_NUM 3
#define BT_MCU_INTERFACE_NUM_MAX 4
#define BT_MCU_NUM_MAX 2
typedef int (*pdwnc_func) (u8 fgReset);
typedef int (*reset_func_ptr2) (unsigned int gpio, int init_value);
typedef int (*set_gpio_low)(u8 gpio);
typedef int (*set_gpio_high)(u8 gpio);
/**
* Send cmd dispatch evt
*/
#define HCI_EV_VENDOR 0xff
#define HCI_USB_IO_BUF_SIZE 256
/* UHW CR mapping */
#define BT_MISC 0x70002510
#define MCU_BT0_INIT_DONE (0x1 << 8)
#define MCU_BT1_INIT_DONE (0x1 << 9)
#define BT_SUBSYS_RST 0x70002610
#define BT_SUBSYS_RST_6639 0x70028610
#define UDMA_INT_STA_BT 0x74000024
#define UDMA_INT_STA_BT1 0x74000308
#define BT_WDT_STATUS 0x740003A0
#define EP_RST_OPT 0x74011890
#define EP_RST_IN_OUT_OPT 0x00010001
#define BT_GDMA_DONE_ADDR_W 0x74000A0C
#define BT_GDMA_DONE_7921_VALUE_W 0x00403FA9
#define BT_GDMA_DONE_7922_VALUE_W 0x00403EA9
#define BT_GDMA_DONE_7902_VALUE_W 0x00403EA9
#define BT_GDMA_DONE_ADDR_R 0x74000A08
#define BT_GDMA_DONE_VALUE_R 0xFFFFFFFB /* bit2: 0 - dma done, 1 - dma doing */
/* CMD&Event sent by driver */
#define NOTIFY_ALT_EVT_LEN 7
#define LD_PATCH_CMD_LEN 9
#define LD_PATCH_EVT_LEN 8
#define READ_ADDRESS_EVT_HDR_LEN 7
#define READ_ADDRESS_EVT_PAYLOAD_OFFSET 7
#define WOBLE_DEBUG_EVT_TYPE 0xE8
#define BLE_EVT_TYPE 0x3E
#define WMT_TRIGGER_ASSERT_LEN 9
struct btmtk_cif_chip_reset {
/* For Whole chip reset */
pdwnc_func pf_pdwndFunc;
reset_func_ptr2 pf_resetFunc2;
set_gpio_low pf_lowFunc;
set_gpio_high pf_highFunc;
};
struct btmtk_usb_dev {
struct usb_endpoint_descriptor *intr_ep;
/* EP10 OUT */
struct usb_endpoint_descriptor *intr_iso_tx_ep;
/* EP10 IN */
struct usb_endpoint_descriptor *intr_iso_rx_ep;
/* BULK CMD EP1 OUT or EP 11 OUT */
struct usb_endpoint_descriptor *bulk_cmd_tx_ep;
/* EP15 in for reset */
struct usb_endpoint_descriptor *reset_intr_ep;
struct usb_endpoint_descriptor *bulk_tx_ep;
struct usb_endpoint_descriptor *bulk_rx_ep;
struct usb_endpoint_descriptor *isoc_tx_ep;
struct usb_endpoint_descriptor *isoc_rx_ep;
struct usb_device *udev;
struct usb_interface *intf;
struct usb_interface *isoc;
struct usb_interface *iso_channel;
struct usb_anchor tx_anchor;
struct usb_anchor intr_anchor;
struct usb_anchor bulk_anchor;
struct usb_anchor isoc_anchor;
struct usb_anchor ctrl_anchor;
struct usb_anchor ble_isoc_anchor;
__u8 cmdreq_type;
__u8 cmdreq;
int new_isoc_altsetting;
int new_isoc_altsetting_interface;
unsigned char *o_usb_buf;
unsigned char *urb_intr_buf;
unsigned char *urb_bulk_buf;
unsigned char *urb_ble_isoc_buf;
struct btmtk_woble bt_woble;
};
#endif


@@ -0,0 +1,3 @@
# load btmtksdio
on boot
insmod /vendor/lib/modules/btmtk_sdio_unify.ko

File diff suppressed because it is too large


@@ -0,0 +1,29 @@
THIS_COMPONENT = usb3
ifdef LINUX_DRV_ROOT
export DRV_ROOT = $(LINUX_DRV_ROOT)
else
export DRV_ROOT = $(TARGET_OPEN_ROOT)
endif
SRC =
OBJ =
SUB_COMPONENTS = mt7668
OPTIONAL_SUB_COMPONENTS =
DEFINES +=
CC_INC +=
#############################################################################
#
# Include the makefile common to all components
#
#############################################################################
include $(DRV_ROOT)/driver.mak


@@ -0,0 +1,265 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2016,2017 MediaTek Inc.
*/
#ifndef __LD_BTMTK_USB_H__
#define __LD_BTMTK_USB_H__
#include "LD_usbbt.h"
/* Memory map for MTK BT */
/* SYS Control */
#define SYSCTL 0x400000
/* WLAN */
#define WLAN 0x410000
/* MCUCTL */
#define CLOCK_CTL 0x0708
#define INT_LEVEL 0x0718
#define COM_REG0 0x0730
#define SEMAPHORE_00 0x07B0
#define SEMAPHORE_01 0x07B4
#define SEMAPHORE_02 0x07B8
#define SEMAPHORE_03 0x07BC
/* Chip definition */
#define CONTROL_TIMEOUT_JIFFIES (300)
#define DEVICE_VENDOR_REQUEST_OUT 0x40
#define DEVICE_VENDOR_REQUEST_IN 0xc0
#define DEVICE_CLASS_REQUEST_OUT 0x20
#define DEVICE_CLASS_REQUEST_IN 0xa0
#define BTUSB_MAX_ISOC_FRAMES 10
#define BTUSB_INTR_RUNNING 0
#define BTUSB_BULK_RUNNING 1
#define BTUSB_ISOC_RUNNING 2
#define BTUSB_SUSPENDING 3
#define BTUSB_DID_ISO_RESUME 4
/* ROM Patch */
#define PATCH_HCI_HEADER_SIZE 4
#define PATCH_WMT_HEADER_SIZE 5
#define PATCH_HEADER_SIZE (PATCH_HCI_HEADER_SIZE + PATCH_WMT_HEADER_SIZE)
#define UPLOAD_PATCH_UNIT 2048
#define PATCH_INFO_SIZE 30
#define PATCH_PHASE1 1
#define PATCH_PHASE2 2
#define PATCH_PHASE3 3
#define PATCH_LEN_ILM (192 * 1024)
#define BUZZARD_CHIP_ID 0x70010200
#define BUZZARD_FLAVOR 0x70010020
#define BUZZARD_FW_VERSION 0x80021004
/**
 * 0: patch download is not complete; BT failed to get the patch semaphore (WiFi got it)
 * 1: patch download is complete
 * 2: patch download is not complete; BT got the patch semaphore
 */
#define PATCH_ERR -1
#define PATCH_IS_DOWNLOAD_BY_OTHER 0
#define PATCH_READY 1
#define PATCH_NEED_DOWNLOAD 2
#define MAX_BIN_FILE_NAME_LEN 64
#define LD_BT_MAX_EVENT_SIZE 260
#define BD_ADDR_LEN 6
#define WOBLE_SETTING_FILE_NAME_7961 "woble_setting_7961.bin"
#define WOBLE_SETTING_FILE_NAME_7668 "woble_setting_7668.bin"
#define WOBLE_SETTING_FILE_NAME_7663 "woble_setting_7663.bin"
#define WOBLE_SETTING_FILE_NAME "woble_setting.bin"
#define WOBLE_CFG_NAME_PREFIX "woble_setting"
#define WOBLE_CFG_NAME_SUFFIX "bin"
#define BT_CFG_NAME "bt.cfg"
#define BT_CFG_NAME_PREFIX "bt_mt"
#define BT_CFG_NAME_PREFIX_76XX "bt_"
#define BT_CFG_NAME_SUFFIX "cfg"
#define BT_UNIFY_WOBLE "SUPPORT_UNIFY_WOBLE"
#define BT_UNIFY_WOBLE_TYPE "UNIFY_WOBLE_TYPE"
#define BT_WMT_CMD "WMT_CMD"
#define WMT_CMD_COUNT 255
#define WAKE_DEV_RECORD "wake_on_ble.conf"
#define WAKE_DEV_RECORD_PATH "misc/bluedroid"
#define APCF_SETTING_COUNT 10
#define WOBLE_SETTING_COUNT 10
/* It is for mt7961 download rom patch*/
#define FW_ROM_PATCH_HEADER_SIZE 32
#define FW_ROM_PATCH_GD_SIZE 64
#define FW_ROM_PATCH_SEC_MAP_SIZE 64
#define SEC_MAP_NEED_SEND_SIZE 52
#define PATCH_STATUS 6
#define SECTION_SPEC_NUM 13
/* Patch download status for 79xx chips:
 * 0:
 *	patch download is not complete; BT driver needs to download the patch
 * 1:
 *	patch is being downloaded by WiFi; BT driver retries until status == PATCH_READY
 * 2:
 *	patch download is complete; BT driver does not need to download the patch
 */
#define BUZZARD_PATCH_ERR -1
#define BUZZARD_PATCH_NEED_DOWNLOAD 0
#define BUZZARD_PATCH_IS_DOWNLOAD_BY_OTHER 1
#define BUZZARD_PATCH_READY 2
/* 0:
 *	use legacy WMT cmd/evt to download the FW patch (USB/SDIO only support 0 for now)
 * 1:
 *	use DMA to download the FW patch
 */
#define PATCH_DOWNLOAD_USING_WMT 0
#define PATCH_DOWNLOAD_USING_DMA 1
#define PATCH_DOWNLOAD_PHASE1_2_DELAY_TIME 1
#define PATCH_DOWNLOAD_PHASE1_2_RETRY 5
#define PATCH_DOWNLOAD_PHASE3_DELAY_TIME 20
#define PATCH_DOWNLOAD_PHASE3_RETRY 20
enum {
BTMTK_EP_TYPE_OUT_CMD = 0, /*EP type out for hci cmd and wmt cmd */
BTMTK_EP_TPYE_OUT_ACL, /* EP type out for acl pkt with load rompatch */
};
typedef enum {
TYPE_APCF_CMD,
} woble_setting_type;
enum fw_cfg_index_len {
FW_CFG_INX_LEN_NONE = 0,
FW_CFG_INX_LEN_2 = 2,
FW_CFG_INX_LEN_3 = 3,
};
struct fw_cfg_struct {
u8 *content; /* APCF content or radio off content */
int length; /* APCF content or radio off content of length */
};
struct bt_cfg_struct {
u8 support_unify_woble; /* support unify woble or not */
u8 unify_woble_type; /* 0: legacy. 1: waveform. 2: IR */
struct fw_cfg_struct wmt_cmd[WMT_CMD_COUNT];
};
struct LD_btmtk_usb_data {
mtkbt_dev_t *udev; /* store the USB device information */
unsigned long flags;
int meta_tx;
HC_IF *hcif;
u8 cmdreq_type;
unsigned int sco_num;
int isoc_altsetting;
int suspend_count;
/* request for different io operation */
u8 w_request;
u8 r_request;
/* io buffer for usb control transfer */
unsigned char *io_buf;
unsigned char *fw_image;
unsigned char *fw_header_image;
unsigned char *rom_patch;
unsigned char *rom_patch_header_image;
unsigned char *rom_patch_bin_file_name;
u32 chip_id;
unsigned int flavor;
unsigned int fw_version;
u8 need_load_fw;
u8 need_load_rom_patch;
u32 rom_patch_offset;
u32 rom_patch_len;
u32 fw_len;
int recv_evt_len;
u8 local_addr[BD_ADDR_LEN];
char *woble_setting_file_name;
u8 *setting_file;
u32 setting_file_len;
u8 *wake_dev; /* ADDR:NAP-UAP-LAP, VID/PID:Both Little endian */
u32 wake_dev_len;
struct fw_cfg_struct woble_setting_apcf[WOBLE_SETTING_COUNT];
struct fw_cfg_struct woble_setting_apcf_fill_mac[WOBLE_SETTING_COUNT];
struct fw_cfg_struct woble_setting_apcf_fill_mac_location[WOBLE_SETTING_COUNT];
struct fw_cfg_struct woble_setting_radio_off;
struct fw_cfg_struct woble_setting_wakeup_type;
/* complete event */
struct fw_cfg_struct woble_setting_radio_off_comp_event;
struct bt_cfg_struct bt_cfg;
};
struct _PATCH_HEADER {
u8 ucDateTime[16];
u8 ucPlatform[4];
u16 u2HwVer;
u16 u2SwVer;
u32 u4MagicNum;
};
struct _Global_Descr {
u32 u4PatchVer;
u32 u4SubSys;
u32 u4FeatureOpt;
u32 u4SectionNum;
};
struct _Section_Map {
u32 u4SecType;
u32 u4SecOffset;
u32 u4SecSize;
union {
u32 u4SecSpec[SECTION_SPEC_NUM];
struct {
u32 u4DLAddr;
u32 u4DLSize;
u32 u4SecKeyIdx;
u32 u4AlignLen;
u32 reserved[9];
} bin_info_spec;
};
};
u8 LD_btmtk_usb_getWoBTW(void);
int LD_btmtk_usb_probe(mtkbt_dev_t *dev, int flag);
void LD_btmtk_usb_disconnect(mtkbt_dev_t *dev);
void LD_btmtk_usb_SetWoble(mtkbt_dev_t *dev);
int Ldbtusb_getBtWakeT(struct LD_btmtk_usb_data *data);
#define REV_MT76x2E3 0x0022
#define MT_REV_LT(_data, _chip, _rev) \
is_##_chip(_data) && (((_data)->chip_id & 0x0000ffff) < (_rev))
#define MT_REV_GTE(_data, _chip, _rev) \
is_##_chip(_data) && (((_data)->chip_id & 0x0000ffff) >= (_rev))
/*
* Load code method
*/
enum LOAD_CODE_METHOD {
BIN_FILE_METHOD,
HEADER_METHOD,
};
#endif


@@ -0,0 +1,135 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2005-2007 MediaTek Inc.
*/
#ifndef _GENERIC_ERRNO_H
#define _GENERIC_ERRNO_H
#define EPERM 1 /* Operation not permitted */
#define ENOENT 2 /* No such file or directory */
#define ESRCH 3 /* No such process */
#define EINTR 4 /* Interrupted system call */
#define EIO 5 /* I/O error */
#define ENXIO 6 /* No such device or address */
#define E2BIG 7 /* Argument list too long */
#define ENOEXEC 8 /* Exec format error */
#define EBADF 9 /* Bad file number */
#define ECHILD 10 /* No child processes */
#define EAGAIN 11 /* Try again */
#define ENOMEM 12 /* Out of memory */
#define EACCES 13 /* Permission denied */
#define EFAULT 14 /* Bad address */
#define ENOTBLK 15 /* Block device required */
#define EBUSY 16 /* Device or resource busy */
#define EEXIST 17 /* File exists */
#define EXDEV 18 /* Cross-device link */
#define ENODEV 19 /* No such device */
#define ENOTDIR 20 /* Not a directory */
#define EISDIR 21 /* Is a directory */
#define EINVAL 22 /* Invalid argument */
#define ENFILE 23 /* File table overflow */
#define EMFILE 24 /* Too many open files */
#define ENOTTY 25 /* Not a typewriter */
#define ETXTBSY 26 /* Text file busy */
#define EFBIG 27 /* File too large */
#define ENOSPC 28 /* No space left on device */
#define ESPIPE 29 /* Illegal seek */
#define EROFS 30 /* Read-only file system */
#define EMLINK 31 /* Too many links */
#define EPIPE 32 /* Broken pipe */
#define EDOM 33 /* Math argument out of domain of func */
#define ERANGE 34 /* Math result not representable */
#define EDEADLK 35 /* Resource deadlock would occur */
#define ENAMETOOLONG 36 /* File name too long */
#define ENOLCK 37 /* No record locks available */
#define ENOSYS 38 /* Function not implemented */
#define ENOTEMPTY 39 /* Directory not empty */
#define ELOOP 40 /* Too many symbolic links encountered */
#define EWOULDBLOCK EAGAIN /* Operation would block */
#define ENOMSG 42 /* No message of desired type */
#define EIDRM 43 /* Identifier removed */
#define ECHRNG 44 /* Channel number out of range */
#define EL2NSYNC 45 /* Level 2 not synchronized */
#define EL3HLT 46 /* Level 3 halted */
#define EL3RST 47 /* Level 3 reset */
#define ELNRNG 48 /* Link number out of range */
#define EUNATCH 49 /* Protocol driver not attached */
#define ENOCSI 50 /* No CSI structure available */
#define EL2HLT 51 /* Level 2 halted */
#define EBADE 52 /* Invalid exchange */
#define EBADR 53 /* Invalid request descriptor */
#define EXFULL 54 /* Exchange full */
#define ENOANO 55 /* No anode */
#define EBADRQC 56 /* Invalid request code */
#define EBADSLT 57 /* Invalid slot */
#define EDEADLOCK EDEADLK
#define EBFONT 59 /* Bad font file format */
#define ENOSTR 60 /* Device not a stream */
#define ENODATA 61 /* No data available */
#define ETIME 62 /* Timer expired */
#define ENOSR 63 /* Out of streams resources */
#define ENONET 64 /* Machine is not on the network */
#define ENOPKG 65 /* Package not installed */
#define EREMOTE 66 /* Object is remote */
#define ENOLINK 67 /* Link has been severed */
#define EADV 68 /* Advertise error */
#define ESRMNT 69 /* Srmount error */
#define ECOMM 70 /* Communication error on send */
#define EPROTO 71 /* Protocol error */
#define EMULTIHOP 72 /* Multihop attempted */
#define EDOTDOT 73 /* RFS specific error */
#define EBADMSG 74 /* Not a data message */
#define EOVERFLOW 75 /* Value too large for defined data type */
#define ENOTUNIQ 76 /* Name not unique on network */
#define EBADFD 77 /* File descriptor in bad state */
#define EREMCHG 78 /* Remote address changed */
#define ELIBACC 79 /* Can not access a needed shared library */
#define ELIBBAD 80 /* Accessing a corrupted shared library */
#define ELIBSCN 81 /* .lib section in a.out corrupted */
#define ELIBMAX 82 /* Attempting to link in too many shared libraries */
#define ELIBEXEC 83 /* Cannot exec a shared library directly */
#define EILSEQ 84 /* Illegal byte sequence */
#define ERESTART 85 /* Interrupted system call should be restarted */
#define ESTRPIPE 86 /* Streams pipe error */
#define EUSERS 87 /* Too many users */
#define ENOTSOCK 88 /* Socket operation on non-socket */
#define EDESTADDRREQ 89 /* Destination address required */
#define EMSGSIZE 90 /* Message too long */
#define EPROTOTYPE 91 /* Protocol wrong type for socket */
#define ENOPROTOOPT 92 /* Protocol not available */
#define EPROTONOSUPPORT 93 /* Protocol not supported */
#define ESOCKTNOSUPPORT 94 /* Socket type not supported */
#define EOPNOTSUPP 95 /* Operation not supported on transport endpoint */
#define EPFNOSUPPORT 96 /* Protocol family not supported */
#define EAFNOSUPPORT 97 /* Address family not supported by protocol */
#define EADDRINUSE 98 /* Address already in use */
#define EADDRNOTAVAIL 99 /* Cannot assign requested address */
#define ENETDOWN 100 /* Network is down */
#define ENETUNREACH 101 /* Network is unreachable */
#define ENETRESET 102 /* Network dropped connection because of reset */
#define ECONNABORTED 103 /* Software caused connection abort */
#define ECONNRESET 104 /* Connection reset by peer */
#define ENOBUFS 105 /* No buffer space available */
#define EISCONN 106 /* Transport endpoint is already connected */
#define ENOTCONN 107 /* Transport endpoint is not connected */
#define ESHUTDOWN 108 /* Cannot send after transport endpoint shutdown */
#define ETOOMANYREFS 109 /* Too many references: cannot splice */
#define ETIMEDOUT 110 /* Connection timed out */
#define ECONNREFUSED 111 /* Connection refused */
#define EHOSTDOWN 112 /* Host is down */
#define EHOSTUNREACH 113 /* No route to host */
#define EALREADY 114 /* Operation already in progress */
#define EINPROGRESS 115 /* Operation now in progress */
#define ESTALE 116 /* Stale NFS file handle */
#define EUCLEAN 117 /* Structure needs cleaning */
#define ENOTNAM 118 /* Not a XENIX named type file */
#define ENAVAIL 119 /* No XENIX semaphores available */
#define EISNAM 120 /* Is a named type file */
#define EREMOTEIO 121 /* Remote I/O error */
#define EDQUOT 122 /* Quota exceeded */
#define ENOMEDIUM 123 /* No medium found */
#define EMEDIUMTYPE 124 /* Wrong medium type */
#endif


@@ -0,0 +1,473 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2018 MediaTek Inc.
*/
//#include <command.h>
//#include <common.h>
//#include <ShareType.h>
//#include <CusConfig.h>
//#include <MsVfs.h>
//#include <MsDebug.h>
//#include "usb_def.h"
//#include <MsSystem.h>
#include <stdio.h>
#include "LD_usbbt.h"
#include "LD_btmtk_usb.h"
//#include "hal_usb.h"
usb_vid_pid array_mtk_vid_pid[] = {
{0x0E8D, 0x7668, "MTK7668"}, // 7668
{0x0E8D, 0x76A0, "MTK7662T"}, // 7662T
{0x0E8D, 0x76A1, "MTK7632T"}, // 7632T
{0x0E8D, 0x7663, "MTK7663"}, //7663
{0x0E8D, 0x7961, "MTK7961"}, //7961
{0x0E8D, 0x7902, "MTK7902"}, //7902
};
int max_mtk_wifi_id = (sizeof(array_mtk_vid_pid) / sizeof(array_mtk_vid_pid[0]));
usb_vid_pid *pmtk_wifi = &array_mtk_vid_pid[0];
static mtkbt_dev_t *g_DrvData = NULL;
extern void LDR_Mount(void);
extern UINT32 FAT_getsize(const char* filename);
extern UINT8 FAT_Read(const char* filename, char *buffer,UINT32 filesize);
VOID *os_memcpy(VOID *dst, const VOID *src, UINT32 len)
{
return x_memcpy(dst, src, len);
}
VOID *os_memmove(VOID *dest, const void *src, UINT32 len)
{
/* NOTE: this forwards to x_memcpy, which does not handle overlapping
 * regions; callers must not pass overlapping buffers here. */
return x_memcpy(dest, src, len);
}
VOID *os_memset(VOID *s, int c, size_t n)
{
return x_memset(s,c,n);
}
VOID *os_kzalloc(size_t size, unsigned int flags)
{
VOID *ptr = x_mem_alloc(size);
if (ptr)
os_memset(ptr, 0, size);
return ptr;
}
void LD_load_code_from_bin(unsigned char **image, char *bin_name, char *path, mtkbt_dev_t *dev, u32 *code_len)
{
int size;
size = FAT_getsize(bin_name);
if (size == -1) {
usb_debug("Get file size fail\n");
return;
}
*code_len = size;
*image = x_mem_alloc(size);
if (*image == NULL) {
usb_debug("Alloc image buffer fail\n");
return;
}
FAT_Read(bin_name, (char *)(*image), size);
}
static int usb_bt_bulk_msg(
mtkbt_dev_t *dev,
u32 epType,
u8 *data,
int size,
int* realsize,
int timeout /* not used */
)
{
int ret = 0;
if(dev == NULL || dev->idev == NULL || dev->bulk_tx_ep == NULL)
{
usb_debug("bulk out error 00\n");
return -1;
}
//usb_debug("[usb_bt_bulk_msg]ep_addr:%x\n", dev->bulk_tx_ep->bEndpointAddress);
//usb_debug("[usb_bt_bulk_msg]ep_maxpkt:%x\n", dev->bulk_tx_ep->wMaxPacketSize);
if(epType == MTKBT_BULK_TX_EP)
{
ret = dev->idev->controller->bulk(dev->bulk_tx_ep, size, data,0);
*realsize = ret;
if(ret<0)
{
usb_debug("bulk out error 01\n");
return -1;
}
if(*realsize == size)
{
//usb_debug("bulk out success 01,size =0x%x\n",size);
return 0;
}
else
{
usb_debug("bulk out fail 02,size =0x%x,realsize =0x%x\n",size,*realsize);
}
}
return -1;
}
static int usb_bt_control_msg(
mtkbt_dev_t *dev,
u32 epType,
u8 request,
u8 requesttype,
u16 value,
u16 index,
u8 *data,
int data_length,
int timeout /* not used */
)
{
int ret = -1;
dev_req_t dr;
dr.bmRequestType = requesttype;
dr.bRequest = request;
dr.wValue = value;
dr.wIndex = index;
dr.wLength = data_length;
if(epType == MTKBT_CTRL_TX_EP)
{
ret = dev->idev->controller->control(dev->idev, OUT, sizeof (dr), &dr, data_length, data);
}
else if (epType == MTKBT_CTRL_RX_EP)
{
ret = dev->idev->controller->control(dev->idev, IN, sizeof (dr), &dr, data_length, data);
}
else
{
usb_debug("control message wrong Type =0x%x\n",epType);
}
if (ret < 0)
{
usb_debug("Err1(%d)\n", ret);
return ret;
}
return ret;
}
static int usb_bt_interrupt_msg(
mtkbt_dev_t *dev,
u32 epType,
u8 *data,
int size,
int* realsize,
int timeout /* unit of 1ms */
)
{
int ret = -1;
usb_debug("epType = 0x%x\n", epType);
if(epType == MTKBT_INTR_EP)
{
ret = dev->idev->controller->intr(dev->intr_ep, size, realsize, data, 2000);
}
if(ret < 0)
{
usb_debug("Err1(%d)\n", ret);
return ret;
}
/* Dump the first bytes only after a successful transfer */
usb_debug("realsize=%d reqdata 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",*realsize,data[0],data[1],data[2],data[3],data[4],data[5]);
return ret;
}
static HC_IF usbbt_host_interface =
{
usb_bt_bulk_msg,
usb_bt_control_msg,
usb_bt_interrupt_msg,
};
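The usbbt_host_interface table above binds the three transport primitives behind the HC_IF function-pointer interface, so the upper BT layer never calls the USB controller directly. Below is a minimal compilable sketch of that dispatch pattern, with simplified stand-ins for mtkbt_dev_t and the real prototypes; `stub_bulk_msg` and `send_bulk` are hypothetical names, not from the driver:

```c
typedef unsigned char u8;
typedef unsigned int u32;

struct dev; /* simplified stand-in for mtkbt_dev_t */

/* Function-pointer table in the shape of HC_IF (bulk op only). */
typedef struct {
	int (*usb_bulk_msg)(struct dev *d, u32 ep_type, u8 *data, int size,
			    int *realsize, int timeout);
} hc_if_t;

struct dev {
	hc_if_t *hci_if;
};

/* Stub transport: pretend the whole buffer was sent. */
static int stub_bulk_msg(struct dev *d, u32 ep_type, u8 *data, int size,
			 int *realsize, int timeout)
{
	(void)d; (void)ep_type; (void)data; (void)timeout;
	*realsize = size;
	return 0;
}

static hc_if_t stub_if = { stub_bulk_msg };

/* Upper-layer helper in the driver's style: success only when the
 * transport reports the full length was transferred. */
static int send_bulk(struct dev *d, u8 *buf, int len)
{
	int real = 0;

	if (d->hci_if->usb_bulk_msg(d, 3 /* MTKBT_BULK_TX_EP */, buf, len,
				    &real, 100) < 0)
		return -1;
	return (real == len) ? 0 : -1;
}
```

With this shape, porting to another boot environment only means filling the table with that environment's bulk/control/interrupt routines, which is what the two LD_usbbt.h variants in this change do.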
static void Ldbtusb_diconnect (btusbdev_t *dev)
{
LD_btmtk_usb_disconnect(g_DrvData);
if(g_DrvData)
{
os_kfree(g_DrvData);
}
g_DrvData = NULL;
}
static int Ldbtusb_SetWoble(btusbdev_t *dev)
{
if(!g_DrvData)
{
usb_debug("usb set woble fail ,because no drv data\n");
return -1;
}
else
{
LD_btmtk_usb_SetWoble(g_DrvData);
usb_debug("usb set woble end\n");
}
return 0;
}
int Ldbtusb_connect (btusbdev_t *dev, int flag)
{
int ret = 0;
// For Mstar
//struct usb_endpoint_descriptor *ep_desc;
//struct usb_interface *iface;
int i;
//iface = &dev->config.if_desc[0];
if(g_DrvData == NULL)
{
g_DrvData = os_kmalloc(sizeof(mtkbt_dev_t),MTK_GFP_ATOMIC);
if(!g_DrvData)
{
usb_debug("Not enough memory for mtkbt virtual usb device.\n");
return -1;
}
else
{
os_memset(g_DrvData,0,sizeof(mtkbt_dev_t));
g_DrvData->idev = dev;
g_DrvData->connect = Ldbtusb_connect;
g_DrvData->disconnect = Ldbtusb_diconnect;
g_DrvData->SetWoble = Ldbtusb_SetWoble;
}
}
else
{
return -1;
}
for (i = 1; i <= dev->num_endp; i++) {
usb_debug("dev->endpoints[%d].type = %d\n", i, dev->endpoints[i].type);
usb_debug("dev->endpoints[%d].endpoint = %d\n", i, dev->endpoints[i].endpoint);
usb_debug("dev->endpoints[%d].direction = %d\n", i, dev->endpoints[i].direction);
if (dev->endpoints[i].type == BULK)
{
if (dev->endpoints[i].direction == IN)
{
g_DrvData->bulk_rx_ep = &dev->endpoints[i];
}
else if (dev->endpoints[i].direction == OUT &&
dev->endpoints[i].endpoint != 0x01)
{
g_DrvData->bulk_tx_ep = &dev->endpoints[i];
}
continue;
}
if (dev->endpoints[i].type == INTERRUPT &&
dev->endpoints[i].endpoint != 0x8f)
{
g_DrvData->intr_ep = &dev->endpoints[i];
continue;
}
}
if (!g_DrvData->intr_ep || !g_DrvData->bulk_tx_ep || !g_DrvData->bulk_rx_ep)
{
os_kfree(g_DrvData);
g_DrvData = NULL;
usb_debug("btmtk_usb_probe end Error 3\n");
return -1;
}
/* Init HostController interface */
g_DrvData->hci_if = &usbbt_host_interface;
/* btmtk init */
ret = LD_btmtk_usb_probe(g_DrvData, flag);
if (ret != 0)
{
usb_debug("usb probe fail\n");
if(g_DrvData)
{
os_kfree(g_DrvData);
}
g_DrvData = NULL;
return -1;
}
else
{
usb_debug("usbbt probe success\n");
}
return ret;
}
u8 LDbtusb_getWoBTW(void)
{
return LD_btmtk_usb_getWoBTW();
}
#if 0
static int checkUsbDevicePort(struct usb_device* udev, u16 vendorID, u16 productID, u8 port)
{
struct usb_device* pdev = NULL;
/*#if defined (CONFIG_USB_PREINIT)
usb_stop(port);
if (usb_post_init(port) == 0)
#else
if (usb_init(port) == 0)
#endif*/
#if 0
{
/* get device */
//pdev = usb_get_dev_index(0);
if ((pdev != NULL) && (pdev->descriptor.idVendor == vendorID) && (pdev->descriptor.idProduct == productID)) // MTK 7662
{
Printf("OK\n");
x_memcpy(udev, pdev, sizeof(struct usb_device));
return 0 ;
}
}
#endif
return -1;
}
#endif
#if 0
static int findUsbDevice(struct usb_device* udev)
{
int ret = -1;
u8 idx = 0;
u8 i = 0;
char portNumStr[10] = "\0";
char* pBTUsbPort = NULL;
Printf("IN\n");
if(udev == NULL)
{
Printf("udev can not be NULL\n");
return -1;
}
//use the usb poll function replace----lining
//keys add:find usb port idx
/*#define BT_USB_PORT "bt_usb_port"
pBTUsbPort = getenv(BT_USB_PORT);
if(pBTUsbPort != NULL)
{
i = 0;
// search mtk bt usb port
idx = atoi(pBTUsbPort);
usb_debug("find mtk bt usb device from usb port[%d]\n", idx);
while (i < max_mtk_wifi_id) {
ret = checkUsbDevicePort(udev, (pmtk_wifi + i)->vid, (pmtk_wifi + i)->pid, idx);
if (ret == 0) break;
i++;
#if defined(NEW_RC_CON) && (NEW_RC_CON == TRUE)
usb_debug("fengchen 7668 error");
return -1;
#endif
}
if(ret == 0)
{
return 0;
}
}*/
//keys add:find usb port idx end!!!
// not find mt bt usb device from given usb port, so poll every usb port.
/*#if defined(ENABLE_FIFTH_EHC)
const char u8UsbPortCount = 5;
#elif defined(ENABLE_FOURTH_EHC)
const char u8UsbPortCount = 4;
#elif defined(ENABLE_THIRD_EHC)
const char u8UsbPortCount = 3;
#elif defined(ENABLE_SECOND_EHC)
const char u8UsbPortCount = 2;
#else
const char u8UsbPortCount = 1;
#endif
for(idx = 0; idx < u8UsbPortCount; idx++)
{
i = 0;
while (i < max_mtk_wifi_id) {
ret = checkUsbDevicePort(udev, (pmtk_wifi + i)->vid, (pmtk_wifi + i)->pid, idx);
if (ret == 0) break;
i++;
}
if(ret == 0)
{
// set bt_usb_port to store mt bt usb device port
snprintf(portNumStr, sizeof(portNumStr), "%d", idx);
setenv(BT_USB_PORT, portNumStr);
saveenv();
return 0;
}
}
if(pBTUsbPort != NULL)
{
// env BT_USB_PORT is invalid, so delete it
setenv(BT_USB_PORT, NULL);
saveenv();
}*/
Printf("USB device not found\n");
return -1;
}
#endif
void do_setMtkBT(usbdev_t *dev)
{
int ret = 0;
Printf("IN\n");
LDR_Mount(); //16566
// MTK USB controller
/*ret = findUsbDevice(&udev);
if (ret != 0)
{
Printf("find bt usb device failed\n");
return -1;
}*/
ret = Ldbtusb_connect(dev,0);
if(ret != 0){
Printf("connect to bt usb device failed\n");
return;
}
ret = Ldbtusb_SetWoble(NULL);
if(ret != 0)
{
Printf("set bt usb device woble cmd failed\n");
return;
}
Printf("OK\n");
}
int getMtkBTWakeT(void)
{
int ret = 0;
#if 0
struct usb_device udev;
memset(&udev, 0, sizeof(struct usb_device));
Printf("IN\n");
// MTK USB controller
ret = findUsbDevice(&udev);
if (ret != 0)
{
Printf("find bt usb device failed\n");
return -1;
}
ret = Ldbtusb_connect(&udev, 1);
if(ret != 0)
{
Printf("connect to bt usb device failed\n");
return -1;
}
if(ret != 0)
{
Printf("set bt usb device woble cmd failed\n");
return -1;
}
Printf("OK\n");
#endif
return ret;
}


@@ -0,0 +1,114 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2008-2010 MediaTek Inc.
*/
#ifndef __LD_USBBT_H__
#define __LD_USBBT_H__
//#include <common.h>
//#include "usb_def.h"
//#include <MsTypes.h>
#include "types.h"
#include "loader_if.h"
#include "usb_type.h"
struct _usb_vid_pid_
{
unsigned short vid;
unsigned short pid;
char name[10];
};
typedef struct _usb_vid_pid_ usb_vid_pid;
#define MTKBT_CTRL_TX_EP 0
#define MTKBT_CTRL_RX_EP 1
#define MTKBT_INTR_EP 2
#define MTKBT_BULK_TX_EP 3
#define MTKBT_BULK_RX_EP 4
#define MTK_GFP_ATOMIC 1
#define CRC_CHECK 0
#define BTLDER "[BT-LOADER] "
#define USB_TYPE_STANDARD (0x00 << 5)
#define USB_TYPE_CLASS (0x01 << 5)
#define USB_TYPE_VENDOR (0x02 << 5)
#define USB_TYPE_RESERVED (0x03 << 5)
#define usb_debug(fmt,...) Printf("%s: "fmt, __func__, ##__VA_ARGS__)
#define usb_debug_raw(p, l, fmt, ...) \
do { \
int raw_count = 0; \
const unsigned char *ptr = p; \
Printf("%s: "fmt, __func__, ##__VA_ARGS__); \
for (raw_count = 0; raw_count < l; ++raw_count) \
Printf(" %02X", ptr[raw_count]); \
Printf("\n"); \
} while (0)
#define os_kmalloc(size,flags) x_mem_alloc(size)
#define os_kfree(ptr) x_mem_free(ptr)
#define MTK_UDELAY(x) HAL_Delay_us(x)
#define MTK_MDELAY(x) HAL_Delay_us(x*1000)
//#define btusbdev_t struct usb_interface
#define btusbdev_t struct usbdev
#undef NULL
#define NULL ((void *)0)
#define s32 signed int
#ifndef TRUE
#define TRUE 1
#endif
#ifndef FALSE
#define FALSE 0
#endif
typedef unsigned int UINT32;
typedef signed int INT32;
typedef unsigned char UINT8;
typedef unsigned long ULONG;
typedef unsigned char BOOL;
typedef struct __USBBT_DEVICE__ mtkbt_dev_t;
typedef struct {
int (*usb_bulk_msg) (mtkbt_dev_t *dev, u32 epType, u8 *data, int size, int* realsize, int timeout);
int (*usb_control_msg) (mtkbt_dev_t *dev, u32 epType, u8 request, u8 requesttype, u16 value, u16 index,
u8 *data, int data_length, int timeout);
int (*usb_interrupt_msg)(mtkbt_dev_t *dev, u32 epType, u8 *data, int size, int* realsize, int timeout);
} HC_IF;
struct __USBBT_DEVICE__
{
void *priv_data;
btusbdev_t* intf;
struct usbdev *idev;
endpoint_t *intr_ep;
endpoint_t *bulk_tx_ep;
endpoint_t *bulk_rx_ep;
endpoint_t *isoc_tx_ep;
endpoint_t *isoc_rx_ep;
HC_IF *hci_if;
int (*connect)(btusbdev_t *dev, int flag);
void (*disconnect)(btusbdev_t *dev);
int (*SetWoble)(btusbdev_t *dev);
};//mtkbt_dev_t;
#define BT_INST(dev) (dev)
u8 LDbtusb_getWoBTW(void);
int Ldbtusb_connect(btusbdev_t *dev, int flag);
VOID *os_memcpy(VOID *dst, const VOID *src, UINT32 len);
VOID *os_memmove(VOID *dest, const void *src,UINT32 len);
VOID *os_memset(VOID *s, int c, size_t n);
VOID *os_kzalloc(size_t size, unsigned int flags);
void LD_load_code_from_bin(unsigned char **image, char *bin_name, char *path, mtkbt_dev_t *dev,u32 *code_len);
void do_setMtkBT(usbdev_t *dev);
int getMtkBTWakeT(void);
#endif
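The usb_debug_raw macro in the header above prints a prefixed message followed by the buffer as space-separated hex bytes. A small self-contained sketch of the same macro shape, with a `Printf` stub (hypothetical, for illustration only) that formats into a buffer instead of the console so the result can be inspected:

```c
#include <stdio.h>
#include <string.h>

static char log_buf[256];

/* Stub for the loader's Printf(): append into log_buf. */
#define Printf(...) \
	snprintf(log_buf + strlen(log_buf), \
		 sizeof(log_buf) - strlen(log_buf), __VA_ARGS__)

/* Same shape as the usb_debug_raw macro in the header above. */
#define usb_debug_raw(p, l, fmt, ...) \
	do { \
		int raw_count = 0; \
		const unsigned char *ptr = p; \
		Printf("%s: " fmt, __func__, ##__VA_ARGS__); \
		for (raw_count = 0; raw_count < l; ++raw_count) \
			Printf(" %02X", ptr[raw_count]); \
		Printf("\n"); \
	} while (0)

/* Dump a 3-byte HCI-style header through the macro. */
static const char *demo_dump(void)
{
	const unsigned char hdr[3] = { 0x01, 0x6F, 0xFC };

	log_buf[0] = '\0';
	usb_debug_raw(hdr, 3, "hdr len=%d", 3);
	return log_buf;
}
```

Note that `##__VA_ARGS__`, taken verbatim from the header, is a GNU C extension that swallows the preceding comma when no variadic arguments are given.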


@@ -0,0 +1,83 @@
###############################################################################
###########################################################################
# $RCSfile: Makefile,v $
# $Revision: #2 $
# $Date: 2009/04/08 $
# $Author: allen.kao $
#
# Description:
# Leaf-level makefile to build the subcomponent of the driver library.
#
# Specify the source files to be compiled in SRC.
#############################################################################
THIS_COMPONENT = usb3
ifeq "$(UBOOT_LIBRARY)" "y"
include $(TOPDIR)/config.mk
CPPFLAGS += -I$(TOPDIR)/board/$(BOARDDIR)/drv_lib/drv_inc -I$(TOPDIR)/board/$(BOARDDIR)/drv_lib/inc
CFLAGS += -I$(TOPDIR)/board/$(BOARDDIR)/drv_lib/drv_inc -I$(TOPDIR)/board/$(BOARDDIR)/drv_lib/inc
CPPFLAGS += -I$(TOPDIR)/board/$(BOARDDIR)/include -I$(OSAI_INC)
CFLAGS += -I$(TOPDIR)/board/$(BOARDDIR)/include -I$(OSAI_INC)
LIB = lib$(THIS_COMPONENT).a
OBJS := LD_btmtk_usb.o LD_usbbt.o
$(LIB): $(OBJS) $(SOBJS)
$(AR) crv $@ $^
clean:
rm -f $(SOBJS) $(OBJS)
distclean: clean
rm -f $(LIB) core *.bak .depend
#########################################################################
.depend: Makefile $(SOBJS:.o=.S) $(OBJS:.o=.c)
$(CC) -M $(CPPFLAGS) $(SOBJS:.o=.S) $(OBJS:.o=.c) > $@
-include .depend
else # UBOOT_LIBRARY
ifdef LINUX_DRV_ROOT
export DRV_ROOT = $(LINUX_DRV_ROOT)
else
export DRV_ROOT = $(TARGET_OPEN_ROOT)
endif
SRC = LD_btmtk_usb.c LD_usbbt.c
ifeq "$(USRDRV)" "true"
ifeq "$(BUILD_LINUX_LOADER)" ""
SRC += kcu_graphic.c
endif
endif
OBJ =
SUB_COMPONENTS =
OPTIONAL_SUB_COMPONENTS =
DEFINES +=
CC_INC += -I$(KERNEL_ROOT)/$(KERNEL_VER)/include -I$(DRV_ROOT)/usb3/libpayload_usb/
#############################################################################
#
# Include the makefile common to all components
#
include $(DRV_ROOT)/driver.mak
endif # UBOOT_LIBRARY


@@ -0,0 +1,268 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#ifndef __LD_BTMTK_USB_H__
#define __LD_BTMTK_USB_H__
#include <mtk-bt/LD_usbbt.h>
/* Memory map for MTK BT */
//#if 0
/* SYS Control */
#define SYSCTL 0x400000
/* WLAN */
#define WLAN 0x410000
/* MCUCTL */
#define CLOCK_CTL 0x0708
#define INT_LEVEL 0x0718
#define COM_REG0 0x0730
#define SEMAPHORE_00 0x07B0
#define SEMAPHORE_01 0x07B4
#define SEMAPHORE_02 0x07B8
#define SEMAPHORE_03 0x07BC
/* Chip definition */
#define CONTROL_TIMEOUT_JIFFIES (300)
#define DEVICE_VENDOR_REQUEST_OUT 0x40
#define DEVICE_VENDOR_REQUEST_IN 0xc0
#define DEVICE_CLASS_REQUEST_OUT 0x20
#define DEVICE_CLASS_REQUEST_IN 0xa0
#define BTUSB_MAX_ISOC_FRAMES 10
#define BTUSB_INTR_RUNNING 0
#define BTUSB_BULK_RUNNING 1
#define BTUSB_ISOC_RUNNING 2
#define BTUSB_SUSPENDING 3
#define BTUSB_DID_ISO_RESUME 4
/* ROM Patch */
#define PATCH_HCI_HEADER_SIZE_BULK_EP 4
#define PATCH_HCI_HEADER_SIZE_CTRL_EP 3
#define PATCH_WMT_HEADER_SIZE 5
#define PATCH_HEADER_SIZE_BULK_EP (PATCH_WMT_HEADER_SIZE + PATCH_HCI_HEADER_SIZE_BULK_EP)
#define PATCH_HEADER_SIZE_CTRL_EP (PATCH_WMT_HEADER_SIZE + PATCH_HCI_HEADER_SIZE_CTRL_EP)
#define UPLOAD_PATCH_UNIT 512
#define PATCH_INFO_SIZE 30
#define PATCH_PHASE1 1
#define PATCH_PHASE2 2
#define PATCH_PHASE3 3
#define PATCH_LEN_ILM (192 * 1024)
#define BUZZARD_CHIP_ID 0x70010200
#define BUZZARD_FLAVOR 0x70010020
#define BUZZARD_FW_VERSION 0x80021004
/**
 * 0: patch download is not complete / BT failed to get the patch semaphore (WiFi got it)
 * 1: patch download is complete
 * 2: patch download is not complete / BT got the patch semaphore
 */
#define PATCH_ERR -1
#define PATCH_IS_DOWNLOAD_BY_OTHER 0
#define PATCH_READY 1
#define PATCH_NEED_DOWNLOAD 2
#define MAX_BIN_FILE_NAME_LEN 64
#define LD_BT_MAX_EVENT_SIZE 260
#define BD_ADDR_LEN 6
#define WOBLE_SETTING_FILE_NAME_7961 "woble_setting_7961.bin"
#define WOBLE_SETTING_FILE_NAME_7668 "woble_setting_7668.bin"
#define WOBLE_SETTING_FILE_NAME_7663 "woble_setting_7663.bin"
#define WOBLE_SETTING_FILE_NAME "woble_setting.bin"
#define BT_CFG_NAME "bt.cfg"
#define BT_CFG_NAME_PREFIX "bt_mt"
#define BT_CFG_NAME_SUFFIX "cfg"
#define BT_UNIFY_WOBLE "SUPPORT_UNIFY_WOBLE"
#define BT_UNIFY_WOBLE_TYPE "UNIFY_WOBLE_TYPE"
#define BT_WMT_CMD "WMT_CMD"
#define WMT_CMD_COUNT 255
#define WAKE_DEV_RECORD "wake_on_ble.conf"
#define WAKE_DEV_RECORD_PATH "misc/bluedroid"
#define APCF_SETTING_COUNT 10
#define WOBLE_SETTING_COUNT 10
/* For MT7961 ROM patch download */
#define FW_ROM_PATCH_HEADER_SIZE 32
#define FW_ROM_PATCH_GD_SIZE 64
#define FW_ROM_PATCH_SEC_MAP_SIZE 64
#define SEC_MAP_NEED_SEND_SIZE 52
#define PATCH_STATUS 6
#define SECTION_SPEC_NUM 13
#define WMT_HEADER_LEN 4
#define LOAD_PATCH_PHASE_LEN 1
/* Patch download status for the 79XX series:
 * 0:
 * patch download is not complete; the BT driver needs to download the patch
 * 1:
 * patch is being downloaded by WiFi; the BT driver must retry until status == PATCH_READY
 * 2:
 * patch download is complete; the BT driver does not need to download the patch
 */
#define BUZZARD_PATCH_ERR -1
#define BUZZARD_PATCH_NEED_DOWNLOAD 0
#define BUZZARD_PATCH_IS_DOWNLOAD_BY_OTHER 1
#define BUZZARD_PATCH_READY 2
/* 0:
 * use legacy WMT cmd/evt to download the FW patch; USB/SDIO currently support only 0
 * 1:
 * use DMA to download the FW patch
 */
#define PATCH_DOWNLOAD_USING_WMT 0
#define PATCH_DOWNLOAD_USING_DMA 1
#define PATCH_DOWNLOAD_CMD_DELAY_TIME 5
#define PATCH_DOWNLOAD_CMD_RETRY 0
#define PATCH_DOWNLOAD_PHASE1_2_DELAY_TIME 1
#define PATCH_DOWNLOAD_PHASE1_2_RETRY 5
#define PATCH_DOWNLOAD_PHASE3_DELAY_TIME 20
#define PATCH_DOWNLOAD_PHASE3_RETRY 20
#define PM_SOURCE_DISABLE (0xFF)
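The BUZZARD_PATCH_* status codes and the retry/delay constants above describe a polling protocol: when WiFi holds the patch download, the BT driver retries until the status reaches ready or the retry budget runs out. A hedged sketch of that loop, assuming the status semantics documented above; `query_patch_status` and `resolve_patch_action` are hypothetical names, and the canned status sequence stands in for the real WMT query:

```c
#define BUZZARD_PATCH_ERR                 -1
#define BUZZARD_PATCH_NEED_DOWNLOAD        0
#define BUZZARD_PATCH_IS_DOWNLOAD_BY_OTHER 1
#define BUZZARD_PATCH_READY                2
#define PATCH_DOWNLOAD_PHASE3_RETRY        20

/* Canned status sequence standing in for the real WMT query:
 * WiFi holds the patch for two polls, then it is ready. */
static int fake_status[] = {
	BUZZARD_PATCH_IS_DOWNLOAD_BY_OTHER,
	BUZZARD_PATCH_IS_DOWNLOAD_BY_OTHER,
	BUZZARD_PATCH_READY,
};
static int poll_idx;

static int query_patch_status(void) /* hypothetical helper */
{
	return fake_status[poll_idx++];
}

/* Resolve the action for the BT driver: 0 = patch ready, proceed;
 * 1 = BT must download the patch itself; -1 = error or timeout. */
static int resolve_patch_action(void)
{
	int retry;

	poll_idx = 0;
	for (retry = 0; retry < PATCH_DOWNLOAD_PHASE3_RETRY; retry++) {
		int status = query_patch_status();

		if (status == BUZZARD_PATCH_READY)
			return 0;
		if (status == BUZZARD_PATCH_NEED_DOWNLOAD)
			return 1; /* our turn to download */
		if (status == BUZZARD_PATCH_ERR)
			return -1;
		/* IS_DOWNLOAD_BY_OTHER: WiFi is downloading; real code
		 * would delay PATCH_DOWNLOAD_PHASE3_DELAY_TIME ms here */
	}
	return -1;
}
```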
enum {
BTMTK_EP_TYPE_OUT_CMD = 0, /*EP type out for hci cmd and wmt cmd */
BTMTK_EP_TPYE_OUT_ACL, /* EP type out for acl pkt with load rompatch */
};
typedef enum {
TYPE_APCF_CMD,
} woble_setting_type;
enum fw_cfg_index_len {
FW_CFG_INX_LEN_NONE = 0,
FW_CFG_INX_LEN_2 = 2,
FW_CFG_INX_LEN_3 = 3,
};
struct fw_cfg_struct {
u8 *content; /* APCF content or radio off content */
int length; /* length of the APCF or radio off content */
};
#define UNIFY_WOBLE_LEGACY 0
#define UNIFY_WOBLE_WAVEFORM 1
struct bt_cfg_struct {
u8 support_unify_woble; /* support unify woble or not */
u8 unify_woble_type; /* 0: legacy. 1: waveform. 2: IR */
struct fw_cfg_struct wmt_cmd[WMT_CMD_COUNT];
};
struct LD_btmtk_usb_data {
mtkbt_dev_t *udev; /* store the usb device information */
unsigned long flags;
int meta_tx;
HC_IF *hcif;
u8 cmdreq_type;
unsigned int sco_num;
int isoc_altsetting;
int suspend_count;
/* request for different io operation */
u8 w_request;
u8 r_request;
/* io buffer for usb control transfer */
unsigned char *io_buf;
unsigned char *fw_image;
unsigned char *fw_header_image;
unsigned char *fw_bin_file_name;
unsigned char *rom_patch;
unsigned char *rom_patch_header_image;
unsigned char *rom_patch_bin_file_name;
u32 chip_id;
unsigned int flavor;
unsigned int fw_version;
u8 need_load_fw;
u8 need_load_rom_patch;
u32 rom_patch_offset;
u32 rom_patch_len;
u32 fw_len;
int recv_evt_len;
u8 local_addr[BD_ADDR_LEN];
char *woble_setting_file_name;
u8 *setting_file;
u32 setting_file_len;
u8 *wake_dev; /* ADDR:NAP-UAP-LAP, VID/PID:Both Little endian */
u32 wake_dev_len;
struct fw_cfg_struct woble_setting_apcf[WOBLE_SETTING_COUNT];
struct fw_cfg_struct woble_setting_apcf_fill_mac[WOBLE_SETTING_COUNT];
struct fw_cfg_struct woble_setting_apcf_fill_mac_location[WOBLE_SETTING_COUNT];
struct fw_cfg_struct woble_setting_radio_off;
struct fw_cfg_struct woble_setting_wakeup_type;
/* complete event */
struct fw_cfg_struct woble_setting_radio_off_comp_event;
struct bt_cfg_struct bt_cfg;
};
struct _PATCH_HEADER {
u8 ucDateTime[16];
u8 ucPlatform[4];
u16 u2HwVer;
u16 u2SwVer;
u32 u4MagicNum;
};
struct _Global_Descr {
u32 u4PatchVer;
u32 u4SubSys;
u32 u4FeatureOpt;
u32 u4SectionNum;
};
struct _Section_Map {
u32 u4SecType;
u32 u4SecOffset;
u32 u4SecSize;
union {
u32 u4SecSpec[SECTION_SPEC_NUM];
struct {
u32 u4DLAddr;
u32 u4DLSize;
u32 u4SecKeyIdx;
u32 u4AlignLen;
u32 reserved[9];
} bin_info_spec;
};
};
u8 LD_btmtk_usb_getWoBTW(void);
int LD_btmtk_usb_probe(mtkbt_dev_t *dev, int flag);
void LD_btmtk_usb_disconnect(mtkbt_dev_t *dev);
void LD_btmtk_usb_SetWoble(mtkbt_dev_t *dev);
int Ldbtusb_getBtWakeT(struct LD_btmtk_usb_data *data);
#define REV_MT76x2E3 0x0022
#define MT_REV_LT(_data, _chip, _rev) \
	(is_##_chip(_data) && (((_data)->chip_id & 0x0000ffff) < (_rev)))
#define MT_REV_GTE(_data, _chip, _rev) \
	(is_##_chip(_data) && (((_data)->chip_id & 0x0000ffff) >= (_rev)))
/*
* Load code method
*/
enum LOAD_CODE_METHOD {
BIN_FILE_METHOD,
HEADER_METHOD,
};
#endif


@@ -0,0 +1,110 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#ifndef __LD_USBBT_H__
#define __LD_USBBT_H__
#include <common.h>
#include <malloc.h>
#include <usb.h>
#include <MsTypes.h>
#define MTKBT_CTRL_TX_EP 0
#define MTKBT_CTRL_RX_EP 1
#define MTKBT_INTR_EP 2
#define MTKBT_BULK_TX_EP 3
#define MTKBT_BULK_RX_EP 4
#define USB_INTR_MSG_TIMO 2000
#define MTK_GFP_ATOMIC 1
#define CRC_CHECK 0
#define BT_USB_PORT "bt_usb_port"
#define BTLDER "[BT-LOADER] "
#define USB_TYPE_STANDARD (0x00 << 5)
#define USB_TYPE_CLASS (0x01 << 5)
#define USB_TYPE_VENDOR (0x02 << 5)
#define USB_TYPE_RESERVED (0x03 << 5)
#define usb_debug(fmt,...) printf("%s: "fmt, __func__, ##__VA_ARGS__)
#define usb_debug_raw(p, l, fmt, ...) \
do { \
int raw_count = 0; \
const unsigned char *ptr = p; \
printf("%s: "fmt, __func__, ##__VA_ARGS__); \
for (raw_count = 0; raw_count < l; ++raw_count) \
printf(" %02X", ptr[raw_count]); \
printf("\n"); \
} while (0)
#define os_kmalloc(size,flags) malloc(size)
#define os_kfree(ptr) free(ptr)
#define MTK_UDELAY(x) udelay(x)
#define MTK_MDELAY(x) mdelay(x)
//#define btusbdev_t struct usb_interface
#define btusbdev_t struct usb_device
#undef NULL
#define NULL ((void *)0)
#define s32 signed int
#ifndef TRUE
#define TRUE 1
#endif
#ifndef FALSE
#define FALSE 0
#endif
typedef unsigned int UINT32;
typedef signed int INT32;
typedef unsigned char UINT8;
typedef unsigned long ULONG;
typedef unsigned char BOOL;
typedef struct __USBBT_DEVICE__ mtkbt_dev_t;
typedef struct {
int (*usb_bulk_msg) (mtkbt_dev_t *dev, u32 epType, u8 *data, int size, int* realsize, int timeout);
int (*usb_control_msg) (mtkbt_dev_t *dev, u32 epType, u8 request, u8 requesttype, u16 value, u16 index,
u8 *data, int data_length, int timeout);
int (*usb_interrupt_msg)(mtkbt_dev_t *dev, u32 epType, u8 *data, int size, int* realsize, int timeout);
} HC_IF;
struct __USBBT_DEVICE__
{
void *priv_data;
btusbdev_t* intf;
struct usb_device *udev;
struct usb_endpoint_descriptor *intr_ep;
struct usb_endpoint_descriptor *bulk_tx_ep;
struct usb_endpoint_descriptor *bulk_rx_ep;
struct usb_endpoint_descriptor *isoc_tx_ep;
struct usb_endpoint_descriptor *isoc_rx_ep;
HC_IF *hci_if;
int (*connect)(btusbdev_t *dev, int flag);
void (*disconnect)(btusbdev_t *dev);
int (*SetWoble)(btusbdev_t *dev);
u32 chipid;
};//mtkbt_dev_t;
#define BT_INST(dev) (dev)
u8 LDbtusb_getWoBTW(void);
int Ldbtusb_connect(btusbdev_t *dev, int flag);
VOID *os_memcpy(VOID *dst, const VOID *src, UINT32 len);
VOID *os_memmove(VOID *dest, const void *src,UINT32 len);
VOID *os_memset(VOID *s, int c, size_t n);
VOID *os_kzalloc(size_t size, unsigned int flags);
void LD_load_code_from_bin(unsigned char **image, char *bin_name, char *path, mtkbt_dev_t *dev,u32 *code_len);
int do_setMtkBT(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]);
int do_getMtkBTWakeT(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]);
#endif


@@ -0,0 +1,136 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#ifndef _GENERIC_ERRNO_H
#define _GENERIC_ERRNO_H
#define EPERM 1 /* Operation not permitted */
#define ENOENT 2 /* No such file or directory */
#define ESRCH 3 /* No such process */
#define EINTR 4 /* Interrupted system call */
#define EIO 5 /* I/O error */
#define ENXIO 6 /* No such device or address */
#define E2BIG 7 /* Argument list too long */
#define ENOEXEC 8 /* Exec format error */
#define EBADF 9 /* Bad file number */
#define ECHILD 10 /* No child processes */
#define EAGAIN 11 /* Try again */
#define ENOMEM 12 /* Out of memory */
#define EACCES 13 /* Permission denied */
#define EFAULT 14 /* Bad address */
#define ENOTBLK 15 /* Block device required */
#define EBUSY 16 /* Device or resource busy */
#define EEXIST 17 /* File exists */
#define EXDEV 18 /* Cross-device link */
#define ENODEV 19 /* No such device */
#define ENOTDIR 20 /* Not a directory */
#define EISDIR 21 /* Is a directory */
#define EINVAL 22 /* Invalid argument */
#define ENFILE 23 /* File table overflow */
#define EMFILE 24 /* Too many open files */
#define ENOTTY 25 /* Not a typewriter */
#define ETXTBSY 26 /* Text file busy */
#define EFBIG 27 /* File too large */
#define ENOSPC 28 /* No space left on device */
#define ESPIPE 29 /* Illegal seek */
#define EROFS 30 /* Read-only file system */
#define EMLINK 31 /* Too many links */
#define EPIPE 32 /* Broken pipe */
#define EDOM 33 /* Math argument out of domain of func */
#define ERANGE 34 /* Math result not representable */
#define EDEADLK 35 /* Resource deadlock would occur */
#define ENAMETOOLONG 36 /* File name too long */
#define ENOLCK 37 /* No record locks available */
#define ENOSYS 38 /* Function not implemented */
#define ENOTEMPTY 39 /* Directory not empty */
#define ELOOP 40 /* Too many symbolic links encountered */
#define EWOULDBLOCK EAGAIN /* Operation would block */
#define ENOMSG 42 /* No message of desired type */
#define EIDRM 43 /* Identifier removed */
#define ECHRNG 44 /* Channel number out of range */
#define EL2NSYNC 45 /* Level 2 not synchronized */
#define EL3HLT 46 /* Level 3 halted */
#define EL3RST 47 /* Level 3 reset */
#define ELNRNG 48 /* Link number out of range */
#define EUNATCH 49 /* Protocol driver not attached */
#define ENOCSI 50 /* No CSI structure available */
#define EL2HLT 51 /* Level 2 halted */
#define EBADE 52 /* Invalid exchange */
#define EBADR 53 /* Invalid request descriptor */
#define EXFULL 54 /* Exchange full */
#define ENOANO 55 /* No anode */
#define EBADRQC 56 /* Invalid request code */
#define EBADSLT 57 /* Invalid slot */
#define EDEADLOCK EDEADLK
#define EBFONT 59 /* Bad font file format */
#define ENOSTR 60 /* Device not a stream */
#define ENODATA 61 /* No data available */
#define ETIME 62 /* Timer expired */
#define ENOSR 63 /* Out of streams resources */
#define ENONET 64 /* Machine is not on the network */
#define ENOPKG 65 /* Package not installed */
#define EREMOTE 66 /* Object is remote */
#define ENOLINK 67 /* Link has been severed */
#define EADV 68 /* Advertise error */
#define ESRMNT 69 /* Srmount error */
#define ECOMM 70 /* Communication error on send */
#define EPROTO 71 /* Protocol error */
#define EMULTIHOP 72 /* Multihop attempted */
#define EDOTDOT 73 /* RFS specific error */
#define EBADMSG 74 /* Not a data message */
#define EOVERFLOW 75 /* Value too large for defined data type */
#define ENOTUNIQ 76 /* Name not unique on network */
#define EBADFD 77 /* File descriptor in bad state */
#define EREMCHG 78 /* Remote address changed */
#define ELIBACC 79 /* Can not access a needed shared library */
#define ELIBBAD 80 /* Accessing a corrupted shared library */
#define ELIBSCN 81 /* .lib section in a.out corrupted */
#define ELIBMAX 82 /* Attempting to link in too many shared libraries */
#define ELIBEXEC 83 /* Cannot exec a shared library directly */
#define EILSEQ 84 /* Illegal byte sequence */
#define ERESTART 85 /* Interrupted system call should be restarted */
#define ESTRPIPE 86 /* Streams pipe error */
#define EUSERS 87 /* Too many users */
#define ENOTSOCK 88 /* Socket operation on non-socket */
#define EDESTADDRREQ 89 /* Destination address required */
#define EMSGSIZE 90 /* Message too long */
#define EPROTOTYPE 91 /* Protocol wrong type for socket */
#define ENOPROTOOPT 92 /* Protocol not available */
#define EPROTONOSUPPORT 93 /* Protocol not supported */
#define ESOCKTNOSUPPORT 94 /* Socket type not supported */
#define EOPNOTSUPP 95 /* Operation not supported on transport endpoint */
#define EPFNOSUPPORT 96 /* Protocol family not supported */
#define EAFNOSUPPORT 97 /* Address family not supported by protocol */
#define EADDRINUSE 98 /* Address already in use */
#define EADDRNOTAVAIL 99 /* Cannot assign requested address */
#define ENETDOWN 100 /* Network is down */
#define ENETUNREACH 101 /* Network is unreachable */
#define ENETRESET 102 /* Network dropped connection because of reset */
#define ECONNABORTED 103 /* Software caused connection abort */
#define ECONNRESET 104 /* Connection reset by peer */
#define ENOBUFS 105 /* No buffer space available */
#define EISCONN 106 /* Transport endpoint is already connected */
#define ENOTCONN 107 /* Transport endpoint is not connected */
#define ESHUTDOWN 108 /* Cannot send after transport endpoint shutdown */
#define ETOOMANYREFS 109 /* Too many references: cannot splice */
#define ETIMEDOUT 110 /* Connection timed out */
#define ECONNREFUSED 111 /* Connection refused */
#define EHOSTDOWN 112 /* Host is down */
#define EHOSTUNREACH 113 /* No route to host */
#define EALREADY 114 /* Operation already in progress */
#define EINPROGRESS 115 /* Operation now in progress */
#define ESTALE 116 /* Stale NFS file handle */
#define EUCLEAN 117 /* Structure needs cleaning */
#define ENOTNAM 118 /* Not a XENIX named type file */
#define ENAVAIL 119 /* No XENIX semaphores available */
#define EISNAM 120 /* Is a named type file */
#define EREMOTEIO 121 /* Remote I/O error */
#define EDQUOT 122 /* Quota exceeded */
#define ENOMEDIUM 123 /* No medium found */
#define EMEDIUMTYPE 124 /* Wrong medium type */
#endif

View File

@@ -0,0 +1,556 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#include <command.h>
#include <common.h>
#include <ShareType.h>
#include <CusConfig.h>
#include <MsVfs.h>
#include <MsDebug.h>
#include <usb.h>
#include <MsSystem.h>
#include <stdio.h>
#include <mtk-bt/LD_usbbt.h>
#include <mtk-bt/LD_btmtk_usb.h>
#define MAX_ROOT_PORTS 8
usb_vid_pid array_mtk_vid_pid[] = {
{0x0E8D, 0x7668, "MTK7668"}, // 7668
{0x0E8D, 0x76A0, "MTK7662T"}, // 7662T
{0x0E8D, 0x76A1, "MTK7632T"}, // 7632T
{0x0E8D, 0x7663, "MTK7663"}, //7663
{0x0E8D, 0x7961, "MTK7961"}, //7961
};
int max_mtk_wifi_id = (sizeof(array_mtk_vid_pid) / sizeof(array_mtk_vid_pid[0]));
usb_vid_pid *pmtk_wifi = &array_mtk_vid_pid[0];
static mtkbt_dev_t *g_DrvData = NULL;
VOID *os_memcpy(VOID *dst, const VOID *src, UINT32 len)
{
return memcpy(dst, src, len);
}
VOID *os_memmove(VOID *dest, const VOID *src, UINT32 len)
{
return memmove(dest, src, len);
}
VOID *os_memset(VOID *s, int c, size_t n)
{
return memset(s, c, n);
}
VOID *os_kzalloc(size_t size, unsigned int flags)
{
VOID *ptr = malloc(size);
if (ptr == NULL) {
usb_debug("malloc failed, ptr is %p\n", ptr);
return ptr;
}
os_memset(ptr, 0, size);
return ptr;
}
int LD_load_code(unsigned char **image, char *partition, char *file, mtkbt_dev_t *dev, u32 *code_len)
{
if (vfs_mount(partition) != 0) {
usb_debug("vfs_mount %s fail\n", partition);
return -1;
}
*code_len = vfs_getsize(file);
if (*code_len == 0) {
usb_debug("Get file size fail\n");
return -1;
}
// malloc buffer to store bt patch file data
*image = malloc(*code_len);
if (*image == NULL) {
usb_debug("malloc fail\n");
*code_len = 0;
return -1;
}
if (vfs_read(*image, file, 0, *code_len) != 0) {
usb_debug("vfs_read %s fail\n", file);
free(*image);
*image = NULL;
*code_len = 0;
return -1;
}
UBOOT_DEBUG("Load file(%s:%s) OK\n", partition, file);
UBOOT_DUMP((unsigned int)*image, 0x200);
return 0;
}
void LD_load_code_from_bin(unsigned char **image, char *bin_name, char *path, mtkbt_dev_t *dev, u32 *code_len)
{
#define ENV_BT_FW_PATH "BTFWBinPath"
#define PARTION_NUM 6
char mtk_patch_bin_patch[128] = "\0";
char *bt_env;
char *partition[PARTION_NUM] = {"cusdata", "tvconfig", "vendor", "userdata", "system", "APP"};
int i = 0;
/** implemented by MStar/MTK
* path: /system/etc/firmware/mt76XX_patch_eX_hdr.bin
* If argument "path" is NULL, access "/etc/firmware" directly, like request_firmware
* If argument "path" is not NULL, so far only the "userdata" directory is supported
* NOTE: the latest vfs_mount seems to decide which directory is accessed this time
*/
if (path) {
(void)snprintf(mtk_patch_bin_patch, sizeof(mtk_patch_bin_patch), "%s/%s", path, bin_name);
printf("File: %s\n", mtk_patch_bin_patch);
} else {
#if (ENABLE_MODULE_ANDROID_BOOT == 1)
(void)snprintf(mtk_patch_bin_patch, sizeof(mtk_patch_bin_patch), "%s/%s", "/firmware", bin_name);
#else
(void)snprintf(mtk_patch_bin_patch, sizeof(mtk_patch_bin_patch), "%s/%s", "/krl/wifi/ralink/firmware", bin_name);
#endif
printf("mtk_patch_bin_patch: %s\n", mtk_patch_bin_patch);
}
bt_env = getenv(ENV_BT_FW_PATH);
if (bt_env == NULL) {
/* get PATH failed */
printf("bt_env is NULL\n");
for (i = 0; i < PARTION_NUM; i++) {
if (LD_load_code(image, partition[i], mtk_patch_bin_patch, dev, code_len) == 0)
return;
}
} else {
printf("bt_env: %s\n", bt_env);
LD_load_code(image, bt_env, mtk_patch_bin_patch, dev, code_len);
}
return;
}
static int usb_bt_bulk_msg(
mtkbt_dev_t *dev,
u32 epType,
u8 *data,
int size,
int* realsize,
int timeout /* not used */
)
{
int ret = 0;
if(dev == NULL || dev->udev == NULL || dev->bulk_tx_ep == NULL)
{
usb_debug("bulk out error 00\n");
return -1;
}
if(epType == MTKBT_BULK_TX_EP)
{
// usb_debug_raw(data, size, "%s: usb_bulk_msg:", __func__);
ret = usb_bulk_msg(dev->udev,usb_sndbulkpipe(dev->udev,dev->bulk_tx_ep->bEndpointAddress),data,size,realsize,2000);
if(ret)
{
usb_debug("bulk out error 01, ret = %d\n", ret);
return -1;
}
if(*realsize == size)
{
//usb_debug("bulk out success 01,size =0x%x\n",size);
return 0;
}
else
{
usb_debug("bulk out fail 02,size =0x%x,realsize =0x%x\n",size,*realsize);
}
}
return -1;
}
static int usb_bt_control_msg(
mtkbt_dev_t *dev,
u32 epType,
u8 request,
u8 requesttype,
u16 value,
u16 index,
u8 *data,
int data_length,
int timeout /* not used */
)
{
int ret = -1;
if(epType == MTKBT_CTRL_TX_EP)
{
// usb_debug_raw(data, data_length, "%s: usb_control_msg:", __func__);
ret = usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0), request,
requesttype, value, index, data, data_length,timeout);
}
else if (epType == MTKBT_CTRL_RX_EP)
{
ret = usb_control_msg(dev->udev, usb_rcvctrlpipe(dev->udev, 0), request,
requesttype, value, index, data, data_length,timeout);
}
else
{
usb_debug("control message wrong Type =0x%x\n",epType);
}
if (ret < 0)
{
usb_debug("Err1(%d)\n", ret);
return ret;
}
return ret;
}
static int usb_bt_interrupt_msg(
mtkbt_dev_t *dev,
u32 epType,
u8 *data,
int size,
int* realsize,
int timeout /* unit of 1ms */
)
{
int ret = -1;
usb_debug("epType = 0x%x\n",epType);
if(epType == MTKBT_INTR_EP)
{
ret = usb_submit_int_msg(dev->udev,usb_rcvintpipe(dev->udev,dev->intr_ep->bEndpointAddress),data,size,realsize,timeout);
}
if(ret < 0 )
{
usb_debug("Err1(%d)\n", ret);
return ret;
}
usb_debug("ret = 0x%x\n",ret);
return ret;
}
static HC_IF usbbt_host_interface =
{
usb_bt_bulk_msg,
usb_bt_control_msg,
usb_bt_interrupt_msg,
};
static void Ldbtusb_diconnect (btusbdev_t *dev)
{
LD_btmtk_usb_disconnect(g_DrvData);
if(g_DrvData)
{
os_kfree(g_DrvData);
}
g_DrvData = NULL;
}
static int Ldbtusb_SetWoble(btusbdev_t *dev)
{
if(!g_DrvData)
{
usb_debug("usb set woble failed, because there is no drv data\n");
return -1;
}
else
{
LD_btmtk_usb_SetWoble(g_DrvData);
usb_debug("usb set woble end\n");
}
return 0;
}
static u32 chipid;
int Ldbtusb_connect (btusbdev_t *dev, int flag)
{
int ret = 0;
// For Mstar
struct usb_endpoint_descriptor *ep_desc;
struct usb_interface *iface;
int i;
iface = &dev->config.if_desc[0];
if(g_DrvData == NULL)
{
g_DrvData = os_kmalloc(sizeof(mtkbt_dev_t),MTK_GFP_ATOMIC);
if(!g_DrvData)
{
usb_debug("Not enough memory for mtkbt virtual usb device.\n");
return -1;
}
else
{
os_memset(g_DrvData,0,sizeof(mtkbt_dev_t));
g_DrvData->udev = dev;
g_DrvData->connect = Ldbtusb_connect;
g_DrvData->disconnect = Ldbtusb_diconnect;
g_DrvData->SetWoble = Ldbtusb_SetWoble;
}
}
else
{
return -1;
}
// For Mstar
for (i = 0; i < iface->desc.bNumEndpoints; i++)
{
ep_desc = &iface->ep_desc[i];
usb_debug("dev->endpoints[%d].bmAttributes = 0x%x\n", i, ep_desc->bmAttributes);
usb_debug("dev->endpoints[%d].bEndpointAddress = 0x%x\n", i, ep_desc->bEndpointAddress);
if ((ep_desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) == USB_ENDPOINT_XFER_BULK)
{
if (ep_desc->bEndpointAddress & USB_DIR_IN)
{
usb_debug("set endpoints[%d] to bulk_rx_ep\n", i);
g_DrvData->bulk_rx_ep = ep_desc;
}
else
{
if (ep_desc->bEndpointAddress != 0x1) {
usb_debug("set endpoints[%d] to bulk_tx_ep\n", i);
g_DrvData->bulk_tx_ep = ep_desc;
}
}
continue;
}
/* is it an interrupt endpoint? */
if (((ep_desc->bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) == USB_ENDPOINT_XFER_INT)
&& ep_desc->bEndpointAddress != 0x8f)
{
usb_debug("set endpoints[%d] to intr_ep\n", i);
g_DrvData->intr_ep = ep_desc;
continue;
}
}
if (!g_DrvData->intr_ep || !g_DrvData->bulk_tx_ep || !g_DrvData->bulk_rx_ep)
{
os_kfree(g_DrvData);
g_DrvData = NULL;
usb_debug("btmtk_usb_probe end Error 3\n");
return -1;
}
/* Init HostController interface */
g_DrvData->hci_if = &usbbt_host_interface;
g_DrvData->chipid = chipid;
/* btmtk init */
ret = LD_btmtk_usb_probe(g_DrvData, flag);
if (ret != 0)
{
usb_debug("usb probe fail\n");
if(g_DrvData)
{
os_kfree(g_DrvData);
}
g_DrvData = NULL;
return -1;
}
else
{
usb_debug("usbbt probe success\n");
}
return ret;
}
u8 LDbtusb_getWoBTW(void)
{
return LD_btmtk_usb_getWoBTW();
}
static int checkUsbDevicePort(struct usb_device *udev, usb_vid_pid *pmtk_dongle, u8 port)
{
struct usb_device *pdev = NULL;
int i;
int dongleCount = 0;
#if defined (CONFIG_USB_PREINIT)
usb_stop(port);
if (usb_post_init(port) == 0)
#else
if (usb_init(port) == 0)
#endif
{
for (i = 0; i < usb_get_dev_num(); i++) {
pdev = usb_get_dev_index(i); // get device
if (pdev != NULL) {
for (dongleCount = 0; dongleCount < max_mtk_wifi_id; dongleCount++) {
if ((pdev->descriptor.idVendor == pmtk_dongle[dongleCount].vid)
&& (pdev->descriptor.idProduct == pmtk_dongle[dongleCount].pid)) {
UBOOT_TRACE("OK\n");
memcpy(udev, pdev, sizeof(struct usb_device));
chipid = pmtk_dongle[dongleCount].pid;
return 0;
}
}
}
}
}
return -1;
}
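The VID/PID table walk in checkUsbDevicePort() above can be sketched as a standalone helper; the table below is abridged from array_mtk_vid_pid and the function name is illustrative, not part of the driver:

```c
#include <stddef.h>

/* Abridged copy of array_mtk_vid_pid for illustration. */
struct vid_pid_entry { unsigned short vid, pid; const char *name; };

static const struct vid_pid_entry mtk_tbl[] = {
    {0x0E8D, 0x7668, "MTK7668"},
    {0x0E8D, 0x76A0, "MTK7662T"},
    {0x0E8D, 0x7663, "MTK7663"},
};

/* Return the index of the matching entry, or -1 if the dongle is unknown. */
int match_mtk_dongle(unsigned short vid, unsigned short pid)
{
    size_t i;
    for (i = 0; i < sizeof(mtk_tbl) / sizeof(mtk_tbl[0]); i++) {
        if (mtk_tbl[i].vid == vid && mtk_tbl[i].pid == pid)
            return (int)i;
    }
    return -1; /* not a supported MTK BT dongle */
}
```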
static int findUsbDevice(struct usb_device* udev)
{
int ret = -1;
u8 idx = 0;
char portNumStr[10] = "\0";
char* pBTUsbPort = NULL;
UBOOT_TRACE("IN\n");
if(udev == NULL)
{
UBOOT_ERROR("udev can not be NULL\n");
return -1;
}
#define BT_USB_PORT "bt_usb_port"
pBTUsbPort = getenv(BT_USB_PORT);
if(pBTUsbPort != NULL)
{
// search mtk bt usb port
idx = atoi(pBTUsbPort);
usb_debug("find mtk bt usb device from usb port[%d]\n", idx);
ret = checkUsbDevicePort(udev, pmtk_wifi, idx);
if(ret == 0)
{
return 0;
}
}
// did not find the mtk bt usb device on the given usb port, so poll every usb port.
#if defined(ENABLE_FIFTH_EHC)
const char u8UsbPortCount = 5;
#elif defined(ENABLE_FOURTH_EHC)
const char u8UsbPortCount = 4;
#elif defined(ENABLE_THIRD_EHC)
const char u8UsbPortCount = 3;
#elif defined(ENABLE_SECOND_EHC)
const char u8UsbPortCount = 2;
#else
const char u8UsbPortCount = 1;
#endif
for(idx = 0; idx < u8UsbPortCount; idx++)
{
ret = checkUsbDevicePort(udev, pmtk_wifi, idx);
if(ret == 0)
{
// set bt_usb_port to store mt bt usb device port
(void)snprintf(portNumStr, sizeof(portNumStr), "%d", idx);
setenv(BT_USB_PORT, portNumStr);
saveenv();
return 0;
}
}
if(pBTUsbPort != NULL)
{
// env BT_USB_PORT is invalid, so delete it
setenv(BT_USB_PORT, NULL);
saveenv();
}
UBOOT_ERROR("usb device not found\n");
return -1;
}
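The caching strategy of findUsbDevice() — try the port remembered in the bt_usb_port env variable first, fall back to scanning every port, then refresh or invalidate the cache — can be sketched as pure logic. Here probe() and the cached int stand in for checkUsbDevicePort() and the env variable; the names are ours:

```c
/* Try the cached port first, otherwise scan ports 0..n_ports-1.
 * On a hit the cache is refreshed; on a full miss it is invalidated. */
int find_port_cached(int (*probe)(int port), int n_ports, int *cached)
{
    int p;
    if (*cached >= 0 && *cached < n_ports && probe(*cached) == 0)
        return *cached;              /* cached port still valid */
    for (p = 0; p < n_ports; p++) {
        if (probe(p) == 0) {
            *cached = p;             /* remember for next boot */
            return p;
        }
    }
    *cached = -1;                    /* invalidate stale cache */
    return -1;
}

/* Sample probe for demonstration: only port 3 has the dongle. */
static int probe_port3(int port) { return port == 3 ? 0 : -1; }
```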
int do_setMtkBT(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[])
{
int ret = 0;
char* pBTUsbPort = NULL;
int usbPort = 0;
struct usb_device udev;
memset(&udev, 0, sizeof(struct usb_device));
UBOOT_TRACE("IN\n");
if (argc < 1)
{
cmd_usage(cmdtp);
return -1;
}
// MTK USB controller
ret = findUsbDevice(&udev);
if (ret != 0)
{
UBOOT_ERROR("find bt usb device failed\n");
return -1;
}
ret = Ldbtusb_connect(&udev, 0);
if(ret != 0){
UBOOT_ERROR("connect to bt usb device failed\n");
return -1;
}
ret = Ldbtusb_SetWoble(&udev);
if(ret != 0)
{
UBOOT_ERROR("set bt usb device woble cmd failed\n");
return -1;
}
usb_debug("ready to do usb_stop\n");
pBTUsbPort = getenv(BT_USB_PORT);
if(pBTUsbPort != NULL)
{
// search mtk bt usb port
usbPort = atoi(pBTUsbPort);
if (usbPort < 0 || usbPort >= MAX_ROOT_PORTS) {
UBOOT_ERROR("usbPort(%d) is not in correct scope\n", usbPort);
return -1;
}
usb_debug("stop usb port: %d\n",usbPort);
if(usb_stop(usbPort) != 0){
usb_debug("usb_stop fail\n");
}
}else{
usb_debug("no BT_USB_PORT\n");
}
UBOOT_TRACE("OK\n");
return ret;
}
int do_getMtkBTWakeT(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[])
{
int ret = 0;
struct usb_device udev;
memset(&udev, 0, sizeof(struct usb_device));
UBOOT_TRACE("IN\n");
if (argc < 1)
{
cmd_usage(cmdtp);
return -1;
}
// MTK USB controller
ret = findUsbDevice(&udev);
if (ret != 0)
{
UBOOT_ERROR("find bt usb device failed\n");
return -1;
}
ret = Ldbtusb_connect(&udev, 1);
if(ret != 0)
{
UBOOT_ERROR("connect to bt usb device failed\n");
return -1;
}
UBOOT_TRACE("OK\n");
return ret;
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,64 @@
acpi_dock_ops
address_space_operations
backlight_ops
block_device_operations
clk_ops
comedi_lrange
component_ops
dentry_operations
dev_pm_ops
dma_map_ops
driver_info
drm_connector_funcs
drm_encoder_funcs
drm_encoder_helper_funcs
ethtool_ops
extent_io_ops
file_lock_operations
file_operations
hv_ops
ide_dma_ops
ide_port_ops
inode_operations
intel_dvo_dev_ops
irq_domain_ops
item_operations
iwl_cfg
iwl_ops
kgdb_arch
kgdb_io
kset_uevent_ops
lock_manager_operations
machine_desc
microcode_ops
mlxsw_reg_info
mtrr_ops
neigh_ops
net_device_ops
nlmsvc_binding
nvkm_device_chip
of_device_id
pci_raw_ops
pipe_buf_operations
platform_hibernation_ops
platform_suspend_ops
proto_ops
regmap_access_table
rpc_pipe_ops
rtc_class_ops
sd_desc
seq_operations
sirfsoc_padmux
snd_ac97_build_ops
snd_soc_component_driver
soc_pcmcia_socket_ops
stacktrace_ops
sysfs_ops
tty_operations
uart_ops
usb_mon_operations
v4l2_ctrl_ops
v4l2_ioctl_ops
vm_operations_struct
wacom_features
wd_ops

File diff suppressed because it is too large

View File

@@ -0,0 +1,33 @@
LOCAL_PATH := $(call my-dir)
LOCAL_PATH_B := $(LOCAL_PATH)
BT_PLATFORM:=$(subst MTK_CONSYS_MT,,$(MTK_BT_CHIP))
$(info [BT_Drv] MTK_BT_SUPPORT = $(MTK_BT_SUPPORT))
$(info [BT_Drv] MTK_BT_CHIP = $(MTK_BT_CHIP))
ifeq ($(strip $(MTK_BT_SUPPORT)), yes)
ifneq (true,$(strip $(TARGET_NO_KERNEL)))
# connac1x
LOG_TAG := [BT_Drv][wmt]
BT_PLATFORM := connac1x
include $(LOCAL_PATH_B)/wmt/Android.mk
# connac20
LOG_TAG := [BT_Drv][btif]
BT_PLATFORM := 6885
include $(LOCAL_PATH_B)/btif/Android.mk
BT_PLATFORM := 6893
include $(LOCAL_PATH_B)/btif/Android.mk
BT_PLATFORM := 6877
include $(LOCAL_PATH_B)/btif/Android.mk
BT_PLATFORM := 6983
include $(LOCAL_PATH_B)/btif/Android.mk
BT_PLATFORM := 6879
include $(LOCAL_PATH_B)/btif/Android.mk
BT_PLATFORM := 6895
include $(LOCAL_PATH_B)/btif/Android.mk
endif
endif
#dirs := btif
#include $(call all-named-subdir-makefiles, $(dirs))

View File

@@ -0,0 +1,111 @@
export KERNEL_SRC := /lib/modules/$(shell uname -r)/build
#################### Configurations ####################
# Compile Options for bt driver configuration.
CONFIG_SUPPORT_BT_DL_WIFI_PATCH=y
CONFIG_SUPPORT_BLUEZ=n
CONFIG_SUPPORT_DVT=n
CONFIG_SUPPORT_MULTI_DEV_NODE=n
ifeq ($(CONFIG_SUPPORT_BT_DL_WIFI_PATCH), y)
ccflags-y += -DCFG_SUPPORT_BT_DL_WIFI_PATCH=1
else
ccflags-y += -DCFG_SUPPORT_BT_DL_WIFI_PATCH=0
endif
ifeq ($(CONFIG_SUPPORT_BLUEZ), y)
ccflags-y += -DCFG_SUPPORT_BLUEZ=1
ccflags-y += -DCFG_SUPPORT_HW_DVT=0
else
ccflags-y += -DCFG_SUPPORT_BLUEZ=0
ccflags-y += -DCFG_SUPPORT_HW_DVT=1
endif
ifeq ($(CONFIG_SUPPORT_DVT), y)
ccflags-y += -DCFG_SUPPORT_DVT=1
else
ccflags-y += -DCFG_SUPPORT_DVT=0
endif
ifeq ($(CONFIG_SUPPORT_DVT), y)
ccflags-y += -DCFG_SUPPORT_DVT=1
else
ccflags-y += -DCFG_SUPPORT_DVT=0
endif
ifeq ($(CONFIG_SUPPORT_MULTI_DEV_NODE), y)
ccflags-y += -DCFG_SUPPORT_MULTI_DEV_NODE=1
else
ccflags-y += -DCFG_SUPPORT_MULTI_DEV_NODE=0
endif
#################### Configurations ####################
# For chip interface, driver supports "usb", "sdio", "uart" and "btif"
MTK_CHIP_IF := usb
ifeq ($(MTK_CHIP_IF), sdio)
MOD_NAME = btmtk_sdio_unify
CFILES := sdio/btmtksdio.c btmtk_woble.c btmtk_buffer_mode.c btmtk_chip_reset.c
ccflags-y += -DCHIP_IF_SDIO
ccflags-y += -DSDIO_DEBUG=0
ccflags-y += -I$(src)/include/sdio
else ifeq ($(MTK_CHIP_IF), usb)
MOD_NAME = btmtk_usb_unify
CFILES := usb/btmtkusb.c btmtk_woble.c btmtk_chip_reset.c
ccflags-y += -DCHIP_IF_USB
ccflags-y += -I$(src)/include/usb
else ifeq ($(MTK_CHIP_IF), uart)
MOD_NAME = btmtk_uart_unify
CFILES := uart/btmtk_uart_main.c
ccflags-y += -DCHIP_IF_UART
ccflags-y += -I$(src)/include/uart
else
MOD_NAME = btmtkbtif_unify
CFILES := btif/btmtk_btif.c
ccflags-y += -DCHIP_IF_BTIF
ccflags-y += -I$(src)/include/btif
endif
CFILES += btmtk_main.c btmtk_fw_log.c
ccflags-y += -I$(src)/include/ -I$(src)/
$(MOD_NAME)-objs := $(CFILES:.c=.o)
obj-m += $(MOD_NAME).o
#VPATH = /opt/toolchains/gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux
#UART_MOD_NAME = btmtk_uart
#UART_CFILES := \
# btmtk_uart_main.c
#$(UART_MOD_NAME)-objs := $(UART_CFILES:.c=.o)
###############################################################################
# Common
###############################################################################
#obj-m := $(UART_MOD_NAME).o
all:
make -C $(KERNEL_SRC) M=$(PWD) modules
clean:
make -C $(KERNEL_SRC) M=$(PWD) clean
# Check coding style
# export IGNORE_CODING_STYLE_RULES := NEW_TYPEDEFS,LEADING_SPACE,CODE_INDENT,SUSPECT_CODE_INDENT
ccs:
./util/checkpatch.pl -f ./sdio/btmtksdio.c
./util/checkpatch.pl -f ./include/sdio/btmtk_sdio.h
./util/checkpatch.pl -f ./include/btmtk_define.h
./util/checkpatch.pl -f ./include/btmtk_drv.h
./util/checkpatch.pl -f ./include/btmtk_chip_if.h
./util/checkpatch.pl -f ./include/btmtk_main.h
./util/checkpatch.pl -f ./include/btmtk_buffer_mode.h
./util/checkpatch.pl -f ./include/btmtk_fw_log.h
./util/checkpatch.pl -f ./include/btmtk_woble.h
./util/checkpatch.pl -f ./include/uart/btmtk_uart.h
./util/checkpatch.pl -f ./uart/btmtk_uart_main.c
./util/checkpatch.pl -f ./include/usb/btmtk_usb.h
./util/checkpatch.pl -f ./usb/btmtkusb.c
./util/checkpatch.pl -f btmtk_fw_log.c
./util/checkpatch.pl -f btmtk_main.c
./util/checkpatch.pl -f btmtk_buffer_mode.c
./util/checkpatch.pl -f btmtk_woble.c
./util/checkpatch.pl -f btmtk_chip_reset.c

View File

@@ -0,0 +1,21 @@
#Please follow the example pattern
#There are some SPACES between parameter and parameter
[Country Code]
[Index] BR_EDR_PWR_MODE, | EDR_MAX_TX_PWR, | BLE_DEFAULT_TX_PWR, | BLE_DEFAULT_TX_PWR_2M, | BLE_LR_S2, | BLE_LR_S8
[AU,SA]
[BT0] 1, 1.75, 1.5, 1, 1, 1
[BT1] 1, 2.75, 2.5, 2, 1, 1
[TW,US]
[BT0] 1, 14, 15, 16, 20, 20
[BT1] 1, 17, 17, 17, 20, 20
[JP]
[BT0] 0, 5.25, -3, -3, -2, -2
[BT1] 0, 5.5, -2.5, -2, -2, -2
[DE]
[BT0] 0, -32, -29, -29, -29, -29
[BT1] 0, -32, -29, -29, -29, -29
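A minimal sketch of how one `[BTx]` row of this table could be parsed, assuming the comma-separated layout shown above; the struct and function names are ours, not from the driver:

```c
#include <stdio.h>

/* Field meanings follow the [Index] header line of the table. */
struct bt_pwr_row {
    int mode;          /* BR_EDR_PWR_MODE */
    double edr_max;    /* EDR_MAX_TX_PWR */
    double ble_1m;     /* BLE_DEFAULT_TX_PWR */
    double ble_2m;     /* BLE_DEFAULT_TX_PWR_2M */
    double ble_lr_s2;  /* BLE_LR_S2 */
    double ble_lr_s8;  /* BLE_LR_S8 */
};

/* Parse one "[BTx] a, b, c, d, e, f" row.
 * Returns the BT index on success, -1 on a malformed line. */
int parse_pwr_row(const char *line, struct bt_pwr_row *r)
{
    int idx;
    if (sscanf(line, "[BT%d] %d, %lf, %lf, %lf, %lf, %lf", &idx,
               &r->mode, &r->edr_max, &r->ble_1m, &r->ble_2m,
               &r->ble_lr_s2, &r->ble_lr_s8) != 7)
        return -1;
    return idx;
}
```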

View File

@@ -0,0 +1,16 @@
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := bt_drv_$(BT_PLATFORM).ko
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MODULE_OWNER := mtk
LOCAL_INIT_RC := init.bt_drv.rc
LOCAL_SRC_FILES := $(patsubst $(LOCAL_PATH)/%,%,$(shell find $(LOCAL_PATH) -type f -name '*.[cho]')) Makefile
LOCAL_REQUIRED_MODULES := conninfra.ko
LOCAL_REQUIRED_MODULES += connfem.ko
include $(MTK_KERNEL_MODULE)
BT_OPTS := BT_PLATFORM=$(BT_PLATFORM) LOG_TAG=$(LOG_TAG)
$(info $(LOG_TAG) BT_OPTS = $(BT_OPTS))
$(linked_module): OPTS += $(BT_OPTS)

View File

@@ -0,0 +1,27 @@
LOCAL_PATH := $(call my-dir)
ifneq (true,$(strip $(TARGET_NO_KERNEL)))
include $(CLEAR_VARS)
LOCAL_MODULE := bt_drv.ko
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MODULE_OWNER := mtk
LOCAL_INIT_RC := init.bt_drv.rc
LOCAL_SRC_FILES := $(patsubst $(LOCAL_PATH)/%,%,$(shell find $(LOCAL_PATH) -type f -name '*.[cho]')) Makefile
LOCAL_REQUIRED_MODULES := conninfra.ko
include $(MTK_KERNEL_MODULE)
#### Copy Module.symvers from $(LOCAL_REQUIRED_MODULES) to this module #######
#### For symbol link (when CONFIG_MODVERSIONS is defined)
CONN_INFRA_EXPORT_SYMBOL := $(subst $(LOCAL_MODULE),$(LOCAL_REQUIRED_MODULES),$(intermediates)/LINKED)/Module.symvers
$(CONN_INFRA_EXPORT_SYMBOL): $(subst $(LOCAL_MODULE),$(LOCAL_REQUIRED_MODULES),$(linked_module))
BT_EXPORT_SYMBOL := $(intermediates)/LINKED/Module.symvers
$(BT_EXPORT_SYMBOL).in: $(intermediates)/LINKED/% : $(CONN_INFRA_EXPORT_SYMBOL)
$(copy-file-to-target)
cp $(CONN_INFRA_EXPORT_SYMBOL) $(BT_EXPORT_SYMBOL)
$(linked_module): $(BT_EXPORT_SYMBOL).in
endif

View File

@@ -0,0 +1,100 @@
###############################################################################
# Bluetooth character device driver
###############################################################################
# ---------------------------------------------------
# Compile Options
# ---------------------------------------------------
ifndef TOP
TOP := $(srctree)/..
endif
ifneq ($(KERNEL_OUT),)
ccflags-y += -imacros $(KERNEL_OUT)/include/generated/autoconf.h
endif
# Force build fail on modpost warning
KBUILD_MODPOST_FAIL_ON_WARNINGS := y
# platform
ifeq ($(CONFIG_WLAN_DRV_BUILD_IN),y)
$(info build-in mode!)
$(info _MTK_BT_CHIP = $(_MTK_BT_CHIP))
# _MTK_BT_CHIP comes from conninfra setting
BT_PLATFORM = $(patsubst MTK_CONSYS_MT%,%,$(strip $(_MTK_BT_CHIP)))
endif
ccflags-y += -D FW_LOG_DEFAULT_ON=0
ccflags-y += -D CONNAC20_CHIPID=$(BT_PLATFORM)
$(info $(LOG_TAG) BT_PLATFORM = $(BT_PLATFORM))
$(info $(LOG_TAG) srctree = $(srctree))
# ---------------------------------------------------
# Compile Options: set feature flag (1: enable, 0: disable)
# ---------------------------------------------------
# build btif interface
ccflags-y += -D CHIP_IF_BTIF
# Use device node or hci_dev as native interface
ccflags-y += -D USE_DEVICE_NODE=1
# Customized fw update feature
ccflags-y += -D CUSTOMER_FW_UPDATE=0
# pm_qos control
ccflags-y += -D PM_QOS_CONTROL=0
# No function, only for build pass
ccflags-y += -D CONFIG_MP_WAKEUP_SOURCE_SYSFS_STAT=1
# Customized feature, load 1b fw bin
#ccflags-y += -D BT_CUS_FEATURE
# ---------------------------------------------------
# Include Path
# ---------------------------------------------------
CONN_INFRA_SRC := $(srctree)/drivers/misc/mediatek/connectivity/conninfra
CONNFEM_SRC := $(srctree)/drivers/misc/mediatek/connectivity/connfem
WMT_SRC := $(srctree)/drivers/misc/mediatek/connectivity/common
BTIF_SRC := $(srctree)/drivers/misc/mediatek/btif
ccflags-y += -I$(srctree)/drivers/misc/mediatek/connectivity/common
ccflags-y += -I$(srctree)/drivers/misc/mediatek/include/mt-plat/
ccflags-y += -I$(srctree)/drivers/misc/mediatek/connectivity/power_throttling
ccflags-y += -I$(srctree)/drivers/gpu/drm/mediatek/mediatek_v2/
ccflags-y += -I$(CONN_INFRA_SRC)/include
ccflags-y += -I$(CONN_INFRA_SRC)/debug_utility/include
ccflags-y += -I$(CONN_INFRA_SRC)/debug_utility/metlog
ccflags-y += -I$(CONN_INFRA_SRC)/debug_utility/
ccflags-y += -I$(CONNFEM_SRC)/include
ccflags-y += -I$(WMT_SRC)/debug_utility
ccflags-y += -I$(BTIF_SRC)/common/inc
ccflags-y += -I$(src)/core/include
ccflags-y += -I$(src)/connsys/connac_2_0
ccflags-y += -I$(src)/../include
ccflags-y += -I$(src)/../include/btif
# ---------------------------------------------------
# Objects List
# ---------------------------------------------------
MODULE_NAME := bt_drv_$(BT_PLATFORM)
ifeq ($(CONFIG_WLAN_DRV_BUILD_IN),y)
obj-y += $(MODULE_NAME).o
else
obj-m += $(MODULE_NAME).o
endif
CORE_OBJS := btmtk_dbg.o btmtk_dbg_tp_evt_if.o btmtk_irq.o btmtk_char_dev.o ../btmtk_fw_log.o ../btmtk_main.o
CHIP_OBJS := btmtk_mt66xx.o
HIF_OBJS := btmtk_btif_main.o btmtk_queue.o
$(MODULE_NAME)-objs += $(CORE_OBJS)
$(MODULE_NAME)-objs += $(HIF_OBJS)
$(MODULE_NAME)-objs += $(CHIP_OBJS)

View File

@@ -0,0 +1,57 @@
CONFIG_MODULE_SIG=n
export KERNEL_SRC := /lib/modules/$(shell uname -r)/build
#################### Configurations ####################
# For chip interface, driver supports "usb", "sdio", "uart" and "btif"
MTK_CHIP_IF := uart
ifeq ($(MTK_CHIP_IF), sdio)
MOD_NAME = btmtksdio
CFILES := btmtk_sdio.c
ccflags-y += -DCHIP_IF_SDIO
else ifeq ($(MTK_CHIP_IF), usb)
MOD_NAME = btmtk_usb
CFILES := btmtkusb.c
ccflags-y += -DCHIP_IF_USB
else ifeq ($(MTK_CHIP_IF), uart)
MOD_NAME = btmtk_uart
CFILES := btmtk_uart_main.c btmtk_mt76xx.c
ccflags-y += -DCHIP_IF_UART
else
MOD_NAME = btmtkbtif
CFILES := btmtk_btif_main.c btmtk_mt66xx.c
ccflags-y += -DCHIP_IF_BTIF
endif
CFILES += btmtk_main.c
ccflags-y += -I$(src)/include/ -I$(src)/
$(MOD_NAME)-objs := $(CFILES:.c=.o)
obj-m += $(MOD_NAME).o
#VPATH = /opt/toolchains/gcc-linaro-aarch64-linux-gnu-4.9-2014.09_linux
#UART_MOD_NAME = btmtk_uart
#UART_CFILES := \
# btmtk_uart_main.c
#$(UART_MOD_NAME)-objs := $(UART_CFILES:.c=.o)
###############################################################################
# Common
###############################################################################
#obj-m := $(UART_MOD_NAME).o
all:
make -C $(KERNEL_SRC) M=$(PWD) modules
clean:
make -C $(KERNEL_SRC) M=$(PWD) clean
# Check coding style
# export IGNORE_CODING_STYLE_RULES := NEW_TYPEDEFS,LEADING_SPACE,CODE_INDENT,SUSPECT_CODE_INDENT
ccs:
./util/checkpatch.pl --no-tree --show-types --max-line-length=120 --ignore $(IGNORE_CODING_STYLE_RULES) -f btmtk_main.c
./util/checkpatch.pl --no-tree --show-types --max-line-length=120 --ignore $(IGNORE_CODING_STYLE_RULES) -f btmtk_sdio.c
./util/checkpatch.pl --no-tree --show-types --max-line-length=120 --ignore $(IGNORE_CODING_STYLE_RULES) -f btmtk_sdio.h
./util/checkpatch.pl --no-tree --show-types --max-line-length=120 --ignore $(IGNORE_CODING_STYLE_RULES) -f btmtk_config.h
./util/checkpatch.pl --no-tree --show-types --max-line-length=120 --ignore $(IGNORE_CODING_STYLE_RULES) -f btmtk_define.h
./util/checkpatch.pl --no-tree --show-types --max-line-length=120 --ignore $(IGNORE_CODING_STYLE_RULES) -f btmtk_drv.h

File diff suppressed because it is too large

View File

@@ -0,0 +1,640 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#include "btmtk_main.h"
#include "btmtk_chip_if.h"
#include "btmtk_fw_log.h"
#include "btmtk_dbg_tp_evt_if.h"
MODULE_LICENSE("Dual BSD/GPL");
#if (USE_DEVICE_NODE == 1)
/*******************************************************************************
* M A C R O S
********************************************************************************
*/
#define BT_DRIVER_NAME "mtk_bt_chrdev"
#define BT_DRIVER_NODE_NAME "stpbt"
/*******************************************************************************
* C O N S T A N T S
********************************************************************************
*/
#define BT_BUFFER_SIZE (2048)
#define FTRACE_STR_LOG_SIZE (256)
#define COMBO_IOC_MAGIC 0xb0
#define COMBO_IOCTL_BT_HOST_DEBUG _IOW(COMBO_IOC_MAGIC, 4, void*)
#define COMBO_IOCTL_BT_INTTRX _IOW(COMBO_IOC_MAGIC, 5, void*)
#define IOCTL_BT_HOST_DEBUG_BUF_SIZE (32)
#define IOCTL_BT_HOST_INTTRX_SIZE (128)
/*******************************************************************************
* D A T A T Y P E S
********************************************************************************
*/
enum chip_reset_state {
CHIP_RESET_NONE,
CHIP_RESET_START,
CHIP_RESET_END,
CHIP_RESET_NOTIFIED
};
/*******************************************************************************
* P U B L I C D A T A
********************************************************************************
*/
/*******************************************************************************
* P R I V A T E D A T A
********************************************************************************
*/
static int32_t BT_devs = 1;
static int32_t BT_major = 192;
module_param(BT_major, uint, 0);
static struct cdev BT_cdev;
static struct class *BT_class;
static struct device *BT_dev;
static uint8_t i_buf[BT_BUFFER_SIZE]; /* Input buffer for read */
static uint8_t o_buf[BT_BUFFER_SIZE]; /* Output buffer for write */
static uint8_t ioc_buf[IOCTL_BT_HOST_INTTRX_SIZE];
extern struct btmtk_dev *g_sbdev;
extern bool g_bt_trace_pt;
extern struct btmtk_btif_dev g_btif_dev;
extern void bthost_debug_init(void);
extern void bthost_debug_save(uint32_t id, uint32_t value, char* desc);
static struct semaphore wr_mtx, rd_mtx;
static struct bt_wake_lock bt_wakelock;
/* Wait queue for poll and read */
static wait_queue_head_t inq;
static DECLARE_WAIT_QUEUE_HEAD(BT_wq);
static int32_t flag;
static int32_t bt_ftrace_flag;
/*
* Reset flag for whole chip reset scenario, to indicate reset status:
* 0 - normal, no whole chip reset occurs
* 1 - reset start
* 2 - reset end, have not sent Hardware Error event yet
* 3 - reset end, already sent Hardware Error event
*/
static uint32_t rstflag = CHIP_RESET_NONE;
static uint8_t HCI_EVT_HW_ERROR[] = {0x04, 0x10, 0x01, 0x00};
static loff_t rd_offset;
/*******************************************************************************
* F U N C T I O N S
********************************************************************************
*/
static int32_t ftrace_print(const uint8_t *str, ...)
{
#ifdef CONFIG_TRACING
va_list args;
uint8_t temp_string[FTRACE_STR_LOG_SIZE];
if (bt_ftrace_flag) {
va_start(args, str);
if (vsnprintf(temp_string, FTRACE_STR_LOG_SIZE, str, args) < 0)
BTMTK_INFO("%s: vsnprintf error", __func__);
va_end(args);
trace_printk("%s\n", temp_string);
}
#endif
return 0;
}
static size_t bt_report_hw_error(uint8_t *buf, size_t count, loff_t *f_pos)
{
size_t bytes_rest, bytes_read;
if (*f_pos == 0)
BTMTK_INFO("Send Hardware Error event to stack to restart Bluetooth");
bytes_rest = sizeof(HCI_EVT_HW_ERROR) - *f_pos;
bytes_read = count < bytes_rest ? count : bytes_rest;
memcpy(buf, HCI_EVT_HW_ERROR + *f_pos, bytes_read);
*f_pos += bytes_read;
return bytes_read;
}
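The resumable short-read logic of bt_report_hw_error() above can be exercised standalone; this userspace analogue uses the same fixed Hardware Error event and the same f_pos bookkeeping, so a read shorter than the event picks up where it left off on the next call:

```c
#include <string.h>

/* Same 4-byte HCI Hardware Error event as the driver. */
static const unsigned char EVT[] = {0x04, 0x10, 0x01, 0x00};

/* Copy at most `count` bytes of the event starting at *f_pos,
 * advancing *f_pos so a partial read can be resumed. */
size_t read_evt_chunk(unsigned char *buf, size_t count, long *f_pos)
{
    size_t rest = sizeof(EVT) - (size_t)*f_pos;
    size_t n = count < rest ? count : rest;
    memcpy(buf, EVT + *f_pos, n);
    *f_pos += (long)n;
    return n;
}
```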
static void bt_state_cb(u_int8_t state)
{
switch (state) {
case FUNC_ON:
rstflag = CHIP_RESET_NONE;
break;
case RESET_START:
rstflag = CHIP_RESET_START;
break;
case FUNC_OFF:
if (rstflag != CHIP_RESET_START) {
rstflag = CHIP_RESET_NONE;
break;
}
case RESET_END:
rstflag = CHIP_RESET_END;
rd_offset = 0;
flag = 1;
wake_up_interruptible(&inq);
wake_up(&BT_wq);
break;
default:
break;
}
return;
}
static void BT_event_cb(void)
{
ftrace_print("%s get called", __func__);
/*
* Hold the wakelock for 100ms to avoid the system entering suspend in this
* case: FW has sent data to the host; the STP driver received the data, put
* it into the BT rx queue, then sent a sleep command and released the
* wakelock as a quick-sleep mechanism for low power; the BT driver will
* wake up the stack hci thread stuck in poll or read.
* But before the hci thread comes to read the data, the system enters
* suspend; the hci command timeout timer keeps counting during the suspend
* period until it expires, then the RTC interrupt wakes up the system, the
* command timeout handler runs and meanwhile the event is received.
* This would falsely trigger an FW assert and should never happen.
*/
bt_hold_wake_lock_timeout(&bt_wakelock, 100);
/*
* Finally, wake up any reader blocked in poll or read
*/
flag = 1;
wake_up_interruptible(&inq);
wake_up(&BT_wq);
ftrace_print("%s wake_up triggered", __func__);
}
static unsigned int BT_poll(struct file *filp, poll_table *wait)
{
uint32_t mask = 0;
//bt_dbg_tp_evt(TP_ACT_POLL, 0, 0, NULL);
if ((!btmtk_rx_data_valid() && rstflag == CHIP_RESET_NONE) ||
(rstflag == CHIP_RESET_START) || (rstflag == CHIP_RESET_NOTIFIED)) {
/*
 * BT RX queue is empty, a whole-chip reset is in progress, or the
 * Hardware Error event for a finished whole-chip reset has already
 * been sent: add to the wait queue.
 */
poll_wait(filp, &inq, wait);
/*
 * Re-check whether the condition changed after poll_wait() returns, in
 * case wake_up_interruptible was called before add_wait_queue; otherwise
 * do_poll would go to sleep and never be woken up until timeout.
*/
if (!((!btmtk_rx_data_valid() && rstflag == CHIP_RESET_NONE) ||
(rstflag == CHIP_RESET_START) || (rstflag == CHIP_RESET_NOTIFIED)))
mask |= POLLIN | POLLRDNORM; /* Readable */
} else {
/* BT RX queue has valid data, or the whole-chip reset has ended and the Hardware Error event has not been sent yet */
mask |= POLLIN | POLLRDNORM; /* Readable */
}
/* Do we need condition here? */
mask |= POLLOUT | POLLWRNORM; /* Writable */
ftrace_print("%s: return mask = 0x%04x", __func__, mask);
return mask;
}
static ssize_t __bt_write(uint8_t *buf, size_t count, uint32_t flags)
{
int32_t retval = 0;
if (g_bt_trace_pt)
bt_dbg_tp_evt(TP_ACT_WR_IN, 0, count, buf);
retval = btmtk_send_data(g_sbdev->hdev, buf, count);
if (retval < 0)
BTMTK_ERR("bt_core_send_data failed, retval %d", retval);
else if (retval == 0) {
/*
 * The TX queue cannot be drained in time and no space is available
 * for the write.
 *
 * In nonblocking mode, return -EAGAIN to let the user retry; the
 * caller should back off briefly instead of retrying immediately.
*/
if (flags & O_NONBLOCK) {
BTMTK_WARN("Non-blocking write, no space is available!");
retval = -EAGAIN;
} else {
/*TODO: blocking write case */
}
} else
BTMTK_DBG("Write bytes %d/%zd", retval, count);
return retval;
}
static ssize_t BT_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
ssize_t retval = 0;
size_t count = iov_iter_count(from);
ftrace_print("%s get called, count %zd", __func__, count);
down(&wr_mtx);
BTMTK_DBG("count %zd", count);
if (rstflag != CHIP_RESET_NONE) {
BTMTK_ERR("whole chip reset occurs! rstflag=%d", rstflag);
retval = -EIO;
goto OUT;
}
if (count > 0) {
if (count > BT_BUFFER_SIZE) {
BTMTK_WARN("Shorten write count from %zd to %d", count, BT_BUFFER_SIZE);
count = BT_BUFFER_SIZE;
}
if (copy_from_iter(o_buf, count, from) != count) {
retval = -EFAULT;
goto OUT;
}
retval = __bt_write(o_buf, count, iocb->ki_filp->f_flags);
}
OUT:
up(&wr_mtx);
return retval;
}
static ssize_t BT_write(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos)
{
ssize_t retval = 0;
ftrace_print("%s get called, count %zd", __func__, count);
down(&wr_mtx);
BTMTK_DBG("count %zd pos %lld", count, *f_pos);
if (rstflag != CHIP_RESET_NONE) {
BTMTK_ERR("whole chip reset occurs! rstflag=%d", rstflag);
retval = -EIO;
goto OUT;
}
if (count > 0) {
if (count > BT_BUFFER_SIZE) {
BTMTK_WARN("Shorten write count from %zd to %d", count, BT_BUFFER_SIZE);
count = BT_BUFFER_SIZE;
}
if (copy_from_user(o_buf, buf, count)) {
retval = -EFAULT;
goto OUT;
}
retval = __bt_write(o_buf, count, filp->f_flags);
}
OUT:
up(&wr_mtx);
return retval;
}
static ssize_t BT_read(struct file *filp, char __user *buf, size_t count, loff_t *f_pos)
{
ssize_t retval = 0;
if (g_bt_trace_pt)
bt_dbg_tp_evt(TP_ACT_RD_IN, 0, count, NULL);
ftrace_print("%s get called, count %zd", __func__, count);
down(&rd_mtx);
BTMTK_DBG("%s: count %zd pos %lld", __func__, count, *f_pos);
if (rstflag != CHIP_RESET_NONE) {
while (rstflag != CHIP_RESET_END && rstflag != CHIP_RESET_NONE) {
/*
* If nonblocking mode, return -EIO directly.
* O_NONBLOCK is specified during open().
*/
if (filp->f_flags & O_NONBLOCK) {
BTMTK_ERR("Non-blocking read, whole chip reset occurs! rstflag=%d", rstflag);
retval = -EIO;
goto OUT;
}
wait_event(BT_wq, flag != 0);
flag = 0;
}
/*
 * Reset end: send the Hardware Error event to the stack only once.
 * To avoid high-frequency reads from the stack before the process is
 * killed, set rstflag to CHIP_RESET_NOTIFIED so that poll and read
 * are blocked after the Hardware Error event has been sent.
*/
retval = bt_report_hw_error(i_buf, count, &rd_offset);
if (rd_offset == sizeof(HCI_EVT_HW_ERROR)) {
rd_offset = 0;
rstflag = CHIP_RESET_NOTIFIED;
}
if (copy_to_user(buf, i_buf, retval)) {
retval = -EFAULT;
if (rstflag == CHIP_RESET_NOTIFIED)
rstflag = CHIP_RESET_END;
}
goto OUT;
}
if (count > BT_BUFFER_SIZE) {
BTMTK_WARN("Shorten read count from %zd to %d", count, BT_BUFFER_SIZE);
count = BT_BUFFER_SIZE;
}
do {
retval = btmtk_receive_data(g_sbdev->hdev, i_buf, count);
if (retval < 0) {
BTMTK_ERR("bt_core_receive_data failed, retval %d", retval);
goto OUT;
} else if (retval == 0) { /* Got nothing, wait for RX queue's signal */
/*
* If nonblocking mode, return -EAGAIN to let user retry.
* O_NONBLOCK is specified during open().
*/
if (filp->f_flags & O_NONBLOCK) {
BTMTK_ERR("Non-blocking read, no data is available!");
retval = -EAGAIN;
goto OUT;
}
wait_event(BT_wq, flag != 0);
flag = 0;
} else { /* Got something from RX queue */
if (g_bt_trace_pt)
bt_dbg_tp_evt(TP_ACT_RD_OUT, 0, retval, i_buf);
break;
}
} while (btmtk_rx_data_valid() && rstflag == CHIP_RESET_NONE);
if (retval == 0) {
if (rstflag != CHIP_RESET_END) { /* Should never happen */
WARN(1, "Blocking read is woken up in unexpected case, rstflag=%d", rstflag);
retval = -EIO;
goto OUT;
} else { /* Reset end, send Hardware Error event only once */
retval = bt_report_hw_error(i_buf, count, &rd_offset);
if (rd_offset == sizeof(HCI_EVT_HW_ERROR)) {
rd_offset = 0;
rstflag = CHIP_RESET_NOTIFIED;
}
}
}
if (copy_to_user(buf, i_buf, retval)) {
retval = -EFAULT;
if (rstflag == CHIP_RESET_NOTIFIED)
rstflag = CHIP_RESET_END;
}
OUT:
up(&rd_mtx);
return retval;
}
int _ioctl_copy_evt_to_buf(uint8_t *buf, int len)
{
BTMTK_INFO("%s", __func__);
memset(ioc_buf, 0x00, sizeof(ioc_buf));
ioc_buf[0] = 0x04; // evt packet type
memcpy(ioc_buf + 1, buf, len); // copy evt to ioctl buffer
BTMTK_INFO_RAW(ioc_buf, len + 1, "%s: len[%d] RX: ", __func__, len + 1);
return 0;
}
static long BT_unlocked_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int32_t retval = 0;
BTMTK_INFO("%s: cmd[0x%08x]", __func__, cmd);
memset(ioc_buf, 0x00, sizeof(ioc_buf));
switch (cmd) {
case COMBO_IOCTL_BT_HOST_DEBUG:
/* input:  arg (buf_size = 32): id[0:3], value[4:7], desc[8:31]
 * output: none
 */
if (copy_from_user(ioc_buf, (uint8_t __user*)arg, IOCTL_BT_HOST_DEBUG_BUF_SIZE))
retval = -EFAULT;
else {
uint32_t* pint32 = (uint32_t*)&ioc_buf[0];
BTMTK_INFO("%s: id[%x], value[0x%08x], desc[%s]", __func__, pint32[0], pint32[1], &ioc_buf[8]);
bthost_debug_save(pint32[0], pint32[1], (char*)&ioc_buf[8]);
}
break;
case COMBO_IOCTL_BT_INTTRX:
/* input:  arg (buf_size = 128): HCI cmd raw data
 * output: arg (buf_size = 128): HCI evt raw data
 */
if (copy_from_user(ioc_buf, (uint8_t __user*)arg, IOCTL_BT_HOST_INTTRX_SIZE))
retval = -EFAULT;
else {
BTMTK_INFO_RAW(ioc_buf, ioc_buf[3] + 4, "%s: len[%d] TX: ", __func__, ioc_buf[3] + 4);
/* DynamicAdjustTxPower function */
if (ioc_buf[0] == 0x01 && ioc_buf[1] == 0x2D && ioc_buf[2] == 0xFC) {
if (btmtk_inttrx_DynamicAdjustTxPower(ioc_buf[4], ioc_buf[5], _ioctl_copy_evt_to_buf, TRUE) == 0) {
if (copy_to_user((uint8_t __user*)arg, ioc_buf, IOCTL_BT_HOST_INTTRX_SIZE))
retval = -EFAULT;
}
} else
retval = -EFAULT;
}
break;
default:
break;
}
return retval;
}
static long BT_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
return BT_unlocked_ioctl(filp, cmd, arg);
}
static int BT_open(struct inode *inode, struct file *file)
{
int32_t ret;
bt_hold_wake_lock(&bt_wakelock);
BTMTK_INFO("major %d minor %d (pid %d)", imajor(inode), iminor(inode), current->pid);
/* Turn on BT */
ret = g_sbdev->hdev->open(g_sbdev->hdev);
if (ret) {
BTMTK_ERR("BT turn on fail!");
bt_release_wake_lock(&bt_wakelock);
return ret;
}
BTMTK_INFO("BT turn on OK!");
btmtk_register_rx_event_cb(g_sbdev->hdev, BT_event_cb);
bt_ftrace_flag = 1;
bt_release_wake_lock(&bt_wakelock);
bthost_debug_init();
return 0;
}
static int BT_close(struct inode *inode, struct file *file)
{
int32_t ret;
bt_hold_wake_lock(&bt_wakelock);
BTMTK_INFO("major %d minor %d (pid %d)", imajor(inode), iminor(inode), current->pid);
bt_ftrace_flag = 0;
//bt_core_unregister_rx_event_cb();
ret = g_sbdev->hdev->close(g_sbdev->hdev);
bt_release_wake_lock(&bt_wakelock);
bthost_debug_init();
if (ret) {
BTMTK_ERR("BT turn off fail!");
return ret;
}
BTMTK_INFO("BT turn off OK!");
return 0;
}
const struct file_operations BT_fops = {
.open = BT_open,
.release = BT_close,
.read = BT_read,
.write = BT_write,
.write_iter = BT_write_iter,
/* .ioctl = BT_ioctl, */
.unlocked_ioctl = BT_unlocked_ioctl,
.compat_ioctl = BT_compat_ioctl,
.poll = BT_poll
};
int BT_init(void)
{
int32_t alloc_err = 0;
int32_t cdv_err = 0;
dev_t dev = MKDEV(BT_major, 0);
sema_init(&wr_mtx, 1);
sema_init(&rd_mtx, 1);
init_waitqueue_head(&inq);
/* Initialize wake lock for I/O operation */
strncpy(bt_wakelock.name, "bt_drv_io", 9);
bt_wakelock.name[9] = 0;
bt_wake_lock_init(&bt_wakelock);
g_btif_dev.state_change_cb[0] = fw_log_bt_state_cb;
g_btif_dev.state_change_cb[1] = bt_state_cb;
/* Allocate char device */
alloc_err = register_chrdev_region(dev, BT_devs, BT_DRIVER_NAME);
if (alloc_err) {
BTMTK_ERR("Failed to register device numbers");
goto alloc_error;
}
cdev_init(&BT_cdev, &BT_fops);
BT_cdev.owner = THIS_MODULE;
cdv_err = cdev_add(&BT_cdev, dev, BT_devs);
if (cdv_err)
goto cdv_error;
BT_class = class_create(THIS_MODULE, BT_DRIVER_NODE_NAME);
if (IS_ERR(BT_class))
goto create_node_error;
BT_dev = device_create(BT_class, NULL, dev, NULL, BT_DRIVER_NODE_NAME);
if (IS_ERR(BT_dev))
goto create_node_error;
BTMTK_INFO("%s driver(major %d) installed", BT_DRIVER_NAME, BT_major);
return 0;
create_node_error:
if (BT_class && !IS_ERR(BT_class)) {
class_destroy(BT_class);
BT_class = NULL;
}
cdev_del(&BT_cdev);
cdv_error:
unregister_chrdev_region(dev, BT_devs);
alloc_error:
main_driver_exit();
return -1;
}
void BT_exit(void)
{
dev_t dev = MKDEV(BT_major, 0);
if (BT_dev && !IS_ERR(BT_dev)) {
device_destroy(BT_class, dev);
BT_dev = NULL;
}
if (BT_class && !IS_ERR(BT_class)) {
class_destroy(BT_class);
BT_class = NULL;
}
cdev_del(&BT_cdev);
unregister_chrdev_region(dev, BT_devs);
g_btif_dev.state_change_cb[0] = NULL;
g_btif_dev.state_change_cb[1] = NULL;
/* Destroy wake lock */
bt_wake_lock_deinit(&bt_wakelock);
BTMTK_INFO("%s driver removed", BT_DRIVER_NAME);
}
#else
int BT_init(void)
{
	BTMTK_INFO("%s: device node not used, return", __func__);
	return 0;
}
void BT_exit(void)
{
	BTMTK_INFO("%s: device node not used, return", __func__);
	return;
}
#endif // USE_DEVICE_NODE
#ifdef MTK_WCN_REMOVE_KERNEL_MODULE
/* build-in mode */
int mtk_wcn_stpbt_drv_init(void)
{
return main_driver_init();
}
EXPORT_SYMBOL(mtk_wcn_stpbt_drv_init);
void mtk_wcn_stpbt_drv_exit(void)
{
return main_driver_exit();
}
EXPORT_SYMBOL(mtk_wcn_stpbt_drv_exit);
#endif


@@ -0,0 +1,897 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#include <linux/kthread.h>
#include <linux/io.h>
#include <linux/uaccess.h>
#include <linux/proc_fs.h>
#include <linux/gpio.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include "btmtk_define.h"
#include "btmtk_chip_if.h"
#include "btmtk_main.h"
#include "conninfra.h"
#include "connsys_debug_utility.h"
#include "metlog.h"
/*******************************************************************************
* C O N S T A N T S
********************************************************************************
*/
#define BT_DBG_PROCNAME "driver/bt_dbg"
#define BUF_LEN_MAX 384
#define BT_DBG_DUMP_BUF_SIZE 1024
#define BT_DBG_PASSWD "4w2T8M65K5?2af+a "
#define BT_DBG_USER_TRX_PREFIX "[user-trx] "
/*******************************************************************************
* D A T A T Y P E S
********************************************************************************
*/
typedef int (*BT_DEV_DBG_FUNC)(int par1, int par2, int par3);
typedef struct {
BT_DEV_DBG_FUNC func;
bool turn_off_availavle; // function can still work when BT is off
} tBT_DEV_DBG_STRUCT;
/*******************************************************************************
* P U B L I C D A T A
********************************************************************************
*/
bool g_bt_trace_pt = FALSE;
/*******************************************************************************
* F U N C T I O N D E C L A R A T I O N S
********************************************************************************
*/
static ssize_t bt_dbg_write(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos);
static ssize_t bt_dbg_read(struct file *filp, char __user *buf, size_t count, loff_t *f_pos);
static int bt_dbg_hwver_get(int par1, int par2, int par3);
static int bt_dbg_chip_rst(int par1, int par2, int par3);
static int bt_dbg_read_chipid(int par1, int par2, int par3);
static int bt_dbg_force_bt_wakeup(int par1, int par2, int par3);
static int bt_dbg_get_fwp_datetime(int par1, int par2, int par3);
static int bt_dbg_get_bt_patch_path(int par1, int par2, int par3);
extern int fwp_if_get_datetime(char *buf, int max_len);
extern int fwp_if_get_bt_patch_path(char *buf, int max_len);
#if (CUSTOMER_FW_UPDATE == 1)
static int bt_dbg_set_fwp_update_enable(int par1, int par2, int par3);
static int bt_dbg_get_fwp_update_info(int par1, int par2, int par3);
extern void fwp_if_set_update_enable(int par);
extern int fwp_if_get_update_info(char *buf, int max_len);
#endif
static int bt_dbg_reg_read(int par1, int par2, int par3);
static int bt_dbg_reg_write(int par1, int par2, int par3);
static int bt_dbg_ap_reg_read(int par1, int par2, int par3);
static int bt_dbg_ap_reg_write(int par1, int par2, int par3);
static int bt_dbg_setlog_level(int par1, int par2, int par3);
static int bt_dbg_set_rt_thread(int par1, int par2, int par3);
static int bt_dbg_get_bt_state(int par1, int par2, int par3);
static int bt_dbg_rx_buf_control(int par1, int par2, int par3);
static int bt_dbg_set_rt_thread_runtime(int par1, int par2, int par3);
static int bt_dbg_fpga_test(int par1, int par2, int par3);
static int bt_dbg_is_adie_work(int par1, int par2, int par3);
static int bt_dbg_met_start_stop(int par1, int par2, int par3);
static int bt_dbg_DynamicAdjustTxPower(int par1, int par2, int par3);
static void bt_dbg_user_trx_proc(char *cmd_raw);
static int bt_dbg_user_trx_cb(uint8_t *buf, int len);
static int bt_dbg_trace_pt(int par1, int par2, int par3);
extern int32_t btmtk_set_wakeup(struct hci_dev *hdev, uint8_t need_wait);
extern int32_t btmtk_set_sleep(struct hci_dev *hdev, u_int8_t need_wait);
extern void bt_trigger_reset(void);
extern int32_t btmtk_set_power_on(struct hci_dev*, u_int8_t for_precal);
extern int32_t btmtk_set_power_off(struct hci_dev*, u_int8_t for_precal);
/*******************************************************************************
* P R I V A T E D A T A
********************************************************************************
*/
extern struct btmtk_dev *g_sbdev;
extern struct bt_dbg_st g_bt_dbg_st;
static struct proc_dir_entry *g_bt_dbg_entry;
static struct mutex g_bt_lock;
static char g_bt_dump_buf[BT_DBG_DUMP_BUF_SIZE];
static char *g_bt_dump_buf_ptr;
static int g_bt_dump_buf_len;
static bool g_bt_dbg_enable = FALSE;
static const tBT_DEV_DBG_STRUCT bt_dev_dbg_struct[] = {
[0x0] = {bt_dbg_hwver_get, FALSE},
[0x1] = {bt_dbg_chip_rst, FALSE},
[0x2] = {bt_dbg_read_chipid, FALSE},
[0x3] = {bt_dbg_force_bt_wakeup, FALSE},
[0x4] = {bt_dbg_reg_read, FALSE},
[0x5] = {bt_dbg_reg_write, FALSE},
[0x6] = {bt_dbg_get_fwp_datetime, TRUE},
#if (CUSTOMER_FW_UPDATE == 1)
[0x7] = {bt_dbg_set_fwp_update_enable, TRUE},
[0x8] = {bt_dbg_get_fwp_update_info, FALSE},
#endif
[0x9] = {bt_dbg_ap_reg_read, FALSE},
[0xa] = {bt_dbg_ap_reg_write, TRUE},
[0xb] = {bt_dbg_setlog_level, TRUE},
[0xc] = {bt_dbg_get_bt_patch_path, TRUE},
[0xd] = {bt_dbg_set_rt_thread, TRUE},
[0xe] = {bt_dbg_get_bt_state, TRUE},
[0xf] = {bt_dbg_rx_buf_control, TRUE},
[0x10] = {bt_dbg_set_rt_thread_runtime, FALSE},
[0x11] = {bt_dbg_fpga_test, TRUE},
[0x12] = {bt_dbg_is_adie_work, TRUE},
[0x13] = {bt_dbg_met_start_stop, FALSE},
[0x14] = {bt_dbg_DynamicAdjustTxPower, FALSE},
[0x15] = {bt_dbg_trace_pt, FALSE},
};
/*******************************************************************************
* F U N C T I O N S
********************************************************************************
*/
void _bt_dbg_reset_dump_buf(void)
{
memset(g_bt_dump_buf, '\0', BT_DBG_DUMP_BUF_SIZE);
g_bt_dump_buf_ptr = g_bt_dump_buf;
g_bt_dump_buf_len = 0;
}
int bt_dbg_hwver_get(int par1, int par2, int par3)
{
BTMTK_INFO("query chip version");
/* TODO: */
return 0;
}
int bt_dbg_chip_rst(int par1, int par2, int par3)
{
if(par2 == 0)
bt_trigger_reset();
else
conninfra_trigger_whole_chip_rst(CONNDRV_TYPE_BT, "bt_dbg");
return 0;
}
int bt_dbg_trace_pt(int par1, int par2, int par3)
{
if(par2 == 0)
g_bt_trace_pt = FALSE;
else
g_bt_trace_pt = TRUE;
return 0;
}
int bt_dbg_read_chipid(int par1, int par2, int par3)
{
return 0;
}
/* Read BGF SYS address (controller view) by 0x18001104 & 0x18900000 */
int bt_dbg_reg_read(int par1, int par2, int par3)
{
uint32_t *dynamic_remap_addr = NULL;
uint32_t *dynamic_remap_value = NULL;
/* TODO: */
dynamic_remap_addr = ioremap(0x18001104, 4);
if (dynamic_remap_addr) {
*dynamic_remap_addr = par2;
BTMTK_DBG("read address = [0x%08x]", par2);
} else {
BTMTK_ERR("ioremap 0x18001104 fail");
return -1;
}
iounmap(dynamic_remap_addr);
dynamic_remap_value = ioremap(0x18900000, 4);
if (dynamic_remap_value)
BTMTK_INFO("%s: 0x%08x value = [0x%08x]", __func__, par2,
*dynamic_remap_value);
else {
BTMTK_ERR("ioremap 0x18900000 fail");
return -1;
}
iounmap(dynamic_remap_value);
return 0;
}
/* Write BGF SYS address (controller view) by 0x18001104 & 0x18900000 */
int bt_dbg_reg_write(int par1, int par2, int par3)
{
uint32_t *dynamic_remap_addr = NULL;
uint32_t *dynamic_remap_value = NULL;
/* TODO: */
dynamic_remap_addr = ioremap(0x18001104, 4);
if (dynamic_remap_addr) {
*dynamic_remap_addr = par2;
BTMTK_DBG("write address = [0x%08x]", par2);
} else {
BTMTK_ERR("ioremap 0x18001104 fail");
return -1;
}
iounmap(dynamic_remap_addr);
dynamic_remap_value = ioremap(0x18900000, 4);
if (dynamic_remap_value)
*dynamic_remap_value = par3;
else {
BTMTK_ERR("ioremap 0x18900000 fail");
return -1;
}
iounmap(dynamic_remap_value);
return 0;
}
int bt_dbg_ap_reg_read(int par1, int par2, int par3)
{
uint32_t *remap_addr = NULL;
int ret_val = 0;
/* TODO: */
remap_addr = ioremap(par2, 4);
if (!remap_addr) {
BTMTK_ERR("ioremap [0x%08x] fail", par2);
return -1;
}
ret_val = *remap_addr;
BTMTK_INFO("%s: 0x%08x read value = [0x%08x]", __func__, par2, ret_val);
iounmap(remap_addr);
return ret_val;
}
int bt_dbg_ap_reg_write(int par1, int par2, int par3)
{
uint32_t *remap_addr = NULL;
/* TODO: */
remap_addr = ioremap(par2, 4);
if (!remap_addr) {
BTMTK_ERR("ioremap [0x%08x] fail", par2);
return -1;
}
*remap_addr = par3;
BTMTK_INFO("%s: 0x%08x write value = [0x%08x]", __func__, par2, par3);
iounmap(remap_addr);
return 0;
}
int bt_dbg_setlog_level(int par1, int par2, int par3)
{
if (par2 < BTMTK_LOG_LVL_ERR || par2 > BTMTK_LOG_LVL_DBG) {
btmtk_log_lvl = BTMTK_LOG_LVL_INFO;
} else {
btmtk_log_lvl = par2;
}
return 0;
}
int bt_dbg_set_rt_thread(int par1, int par2, int par3)
{
g_bt_dbg_st.rt_thd_enable = par2;
return 0;
}
int bt_dbg_set_rt_thread_runtime(int par1, int par2, int par3)
{
struct sched_param params;
int policy = 0;
int ret = 0;
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
/* reference parameter:
- normal: 0x10 0x01(SCHED_FIFO) 0x01
- rt_thd: 0x10 0x01(SCHED_FIFO) 0x50(MAX_RT_PRIO - 20)
*/
if (par2 > SCHED_DEADLINE || par3 > MAX_RT_PRIO) {
BTMTK_INFO("%s: parameter not allowed!", __func__);
return 0;
}
policy = par2;
params.sched_priority = par3;
ret = sched_setscheduler(cif_dev->tx_thread, policy, &params);
BTMTK_INFO("%s: ret[%d], policy[%d], sched_priority[%d]", __func__, ret, policy, params.sched_priority);
return 0;
}
int bt_dbg_fpga_test(int par1, int par2, int par3)
{
/* reference parameter:
- 0x12 0x01(power on) 0x00
- 0x12 0x02(power off) 0x00
*/
BTMTK_INFO("%s: par2 = %d", __func__, par2);
switch (par2) {
case 1:
btmtk_set_power_on(g_sbdev->hdev, FALSE);
break;
case 2:
btmtk_set_power_off(g_sbdev->hdev, FALSE);
break;
default:
break;
}
BTMTK_INFO("%s: done", __func__);
return 0;
}
int bt_dbg_is_adie_work(int par1, int par2, int par3)
{
int ret = 0, adie_state = 0;
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
if (cif_dev->bt_state == FUNC_ON) {
adie_state = 0; // power on a-die pass
goto end;
}
ret = conninfra_pwr_on(CONNDRV_TYPE_BT);
//if ((ret == CONNINFRA_POWER_ON_A_DIE_FAIL) || (ret == CONNINFRA_POWER_ON_D_DIE_FAIL))
if (ret != 0)
adie_state = 1; // power on a-die fail, may be evb without DTB
else {
adie_state = 0; // power on a-die pass
conninfra_pwr_off(CONNDRV_TYPE_BT);
}
end:
BTMTK_INFO("%s: ret[%d], adie_state[%d]", __func__, ret, adie_state);
_bt_dbg_reset_dump_buf();
g_bt_dump_buf[0] = (adie_state == 0 ? '0' : '1'); // '0': adie pass, '1': adie fail
g_bt_dump_buf[1] = '\0';
g_bt_dump_buf_len = 2;
return 0;
}
int bt_dbg_met_start_stop(int par1, int par2, int par3)
{
uint32_t val = 0, star_addr = 0, end_addr = 0;
int res = 0;
struct conn_metlog_info info;
phys_addr_t emi_base;
BTMTK_INFO("%s, par2 = %d", __func__, par2);
/* reference parameter:
- start: 0x11 0x01 0x00
- stop: 0x11 0x00 0x00
*/
if (par2 == 0x01) {
/*
// Set EMI Writing Range
bt_dbg_ap_reg_write(0, 0x1882140C, 0xF0027000); // BGF_ON_MET_START_ADDR
bt_dbg_ap_reg_write(0, 0x18821410, 0xF002EFFF); // BGF_ON_MET_END_ADDR
*/
// Set Ring Buffer Mode
val = bt_dbg_ap_reg_read(0, 0x18821404, 0);
bt_dbg_ap_reg_write(0, 0x18821404, val | 0x0001); // BGF_ON_MET_CTL1[0] = 0x01
// Set Sampling Rate
val = bt_dbg_ap_reg_read(0, 0x18821400, 0);
bt_dbg_ap_reg_write(0, 0x18821400, (val & 0xFFFF80FF) | 0x00001900); // BGF_ON_MET_CTL0[14:8] = 0x19
// Set Mask Signal
//bt_dbg_ap_reg_write(0, 0x18821400, (val & 0x0000FFFF) | 0x????0000); // BGF_ON_MET_CTL0[31:16] = ?
// Enable Connsys MET
val = bt_dbg_ap_reg_read(0, 0x18821400, 0);
bt_dbg_ap_reg_write(0, 0x18821400, (val & 0xFFFFFFFC) | 0x00000003); // BGF_ON_MET_CTL0[1:0] = 0x03
/* write parameters and start MET test */
conninfra_get_phy_addr(&emi_base, NULL);
info.type = CONNDRV_TYPE_BT;
info.read_cr = 0x18821418;
info.write_cr = 0x18821414;
// FW will write the start & end addresses to the corresponding CRs when BT is on
star_addr = bt_dbg_ap_reg_read(0, 0x1882140C, 0);
end_addr = bt_dbg_ap_reg_read(0, 0x18821410, 0);
BTMTK_INFO("%s: star_addr[0x%08x], end_addr[0x%08x]", __func__, star_addr, end_addr);
if (star_addr >= 0x00400000 && star_addr <= 0x0041FFFF) {
// met data on sysram
info.met_base_ap = 0x18440000 + star_addr;
info.met_base_fw = star_addr;
} else if (star_addr >= 0xF0000000 && star_addr <= 0xF3FFFFFF){
// met data on emi
info.met_base_ap = emi_base + MET_EMI_ADDR;
info.met_base_fw = 0xF0000000 + MET_EMI_ADDR;
} else {
// error case
BTMTK_ERR("%s: get unexpected met address!!", __func__);
return 0;
}
info.met_size = end_addr - star_addr + 1;
info.output_len = 32;
res = conn_metlog_start(&info);
BTMTK_INFO("%s: conn_metlog_start, result = %d", __func__, res);
} else {
// stop MET test
res = conn_metlog_stop(CONNDRV_TYPE_BT);
BTMTK_INFO("%s: conn_metlog_stop, result = %d", __func__, res);
// Disable Connsys MET
val = bt_dbg_ap_reg_read(0, 0x18821400, 0);
bt_dbg_ap_reg_write(0, 0x18821400, val & 0xFFFFFFFE); // BGF_ON_MET_CTL0[0] = 0x00
}
return 0;
}
int bt_dbg_DynamicAdjustTxPower_cb(uint8_t *buf, int len)
{
BTMTK_INFO("%s", __func__);
bt_dbg_user_trx_cb(buf, len);
return 0;
}
int bt_dbg_DynamicAdjustTxPower(int par1, int par2, int par3)
{
uint8_t mode = (uint8_t)par2;
int8_t set_val = (int8_t)par3;
/* reference parameter:
- query: 0x14 0x01(query) 0x00
- set: 0x14 0x02(set) 0x??(set_dbm_val)
*/
BTMTK_INFO("%s", __func__);
btmtk_inttrx_DynamicAdjustTxPower(mode, set_val, bt_dbg_DynamicAdjustTxPower_cb, TRUE);
return 0;
}
/*
sample code to use gpio
int bt_dbg_device_is_evb(int par1, int par2, int par3)
{
struct device_node *node = NULL;
int gpio_addr = 0, gpio_val = 0;
node = of_find_compatible_node(NULL, NULL, "mediatek,evb_gpio");
gpio_addr = of_get_named_gpio(node, "evb_gpio", 0);
if (gpio_addr > 0)
gpio_val = gpio_get_value(gpio_addr); // 0x00: phone, 0x01: evb
BTMTK_INFO("%s: gpio_addr[%d], gpio_val[%d]", __func__, gpio_addr, gpio_val);
_bt_dbg_reset_dump_buf();
g_bt_dump_buf[0] = (gpio_val == 0 ? '0' : '1'); // 0x00: phone, 0x01: evb
g_bt_dump_buf[1] = '\0';
g_bt_dump_buf_len = 2;
return 0;
}
dts setting
evb_gpio: evb_gpio@1100c000 {
compatible = "mediatek,evb_gpio";
evb_gpio = <&pio 57 0x0>;
};
*/
int bt_dbg_get_bt_state(int par1, int par2, int par3)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
bool bt_state = 0;
// 0x01: bt on, 0x00: bt off
bt_state = (cif_dev->bt_state == FUNC_ON ? 1 : 0);
BTMTK_INFO("%s: bt_state[%d]", __func__, bt_state);
_bt_dbg_reset_dump_buf();
g_bt_dump_buf[0] = bt_state;
g_bt_dump_buf[1] = '\0';
g_bt_dump_buf_len = 2;
return 0;
}
int bt_dbg_force_bt_wakeup(int par1, int par2, int par3)
{
int ret;
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
BTMTK_INFO("%s", __func__);
switch(par2) {
case 0:
cif_dev->psm.force_on = FALSE;
ret = btmtk_set_sleep(g_sbdev->hdev, TRUE);
break;
case 1:
cif_dev->psm.force_on = TRUE;
ret = btmtk_set_wakeup(g_sbdev->hdev, TRUE);
break;
default:
BTMTK_ERR("Not supported");
return -1;
}
BTMTK_INFO("bt %s %s", (par2 == 1) ? "wakeup" : "sleep",
(ret) ? "fail" : "success");
return 0;
}
int bt_dbg_get_fwp_datetime(int par1, int par2, int par3)
{
_bt_dbg_reset_dump_buf();
g_bt_dump_buf_len = fwp_if_get_datetime(g_bt_dump_buf, BT_DBG_DUMP_BUF_SIZE);
return 0;
}
int bt_dbg_get_bt_patch_path(int par1, int par2, int par3)
{
_bt_dbg_reset_dump_buf();
g_bt_dump_buf_len = fwp_if_get_bt_patch_path(g_bt_dump_buf, BT_DBG_DUMP_BUF_SIZE);
return 0;
}
#if (CUSTOMER_FW_UPDATE == 1)
int bt_dbg_set_fwp_update_enable(int par1, int par2, int par3)
{
fwp_if_set_update_enable(par2);
return 0;
}
int bt_dbg_get_fwp_update_info(int par1, int par2, int par3)
{
_bt_dbg_reset_dump_buf();
g_bt_dump_buf_len = fwp_if_get_update_info(g_bt_dump_buf, BT_DBG_DUMP_BUF_SIZE);
return 0;
}
#endif
int bt_dbg_rx_buf_control(int par1, int par2, int par3)
{
/*
 * 0x00: disable
 * 0x01: wait for the RX buffer to become available, up to 200ms
 */
BTMTK_INFO("%s: rx_buf_ctrl[%d] set to [%d]", __func__, g_bt_dbg_st.rx_buf_ctrl, par2);
g_bt_dbg_st.rx_buf_ctrl = par2;
return 0;
}
ssize_t bt_dbg_read(struct file *filp, char __user *buf, size_t count, loff_t *f_pos)
{
int ret = 0;
int dump_len;
BTMTK_INFO("%s: count[%zd]", __func__, count);
ret = mutex_lock_killable(&g_bt_lock);
if (ret) {
BTMTK_ERR("%s: failed to acquire dump lock!", __func__);
return ret;
}
if (g_bt_dump_buf_len == 0)
goto exit;
if (*f_pos == 0)
g_bt_dump_buf_ptr = g_bt_dump_buf;
dump_len = g_bt_dump_buf_len >= count ? count : g_bt_dump_buf_len;
ret = copy_to_user(buf, g_bt_dump_buf_ptr, dump_len);
if (ret) {
BTMTK_ERR("%s: copy to dump info buffer failed, ret:%d", __func__, ret);
ret = -EFAULT;
goto exit;
}
*f_pos += dump_len;
g_bt_dump_buf_len -= dump_len;
g_bt_dump_buf_ptr += dump_len;
BTMTK_INFO("%s: after read, remaining dump info buffer len(%d)", __func__, g_bt_dump_buf_len);
ret = dump_len;
exit:
mutex_unlock(&g_bt_lock);
return ret;
}
int bt_osal_strtol(const char *str, unsigned int adecimal, long *res)
{
if (sizeof(long) == 4)
return kstrtou32(str, adecimal, (unsigned int *) res);
else
return kstrtol(str, adecimal, res);
}
int bt_dbg_user_trx_cb(uint8_t *buf, int len)
{
unsigned char *ptr = buf;
int i = 0;
_bt_dbg_reset_dump_buf();
// write event packet type
if (snprintf(g_bt_dump_buf, 6, "0x04 ") < 0) {
BTMTK_INFO("%s: snprintf error", __func__);
goto end;
}
for (i = 0; i < len; i++) {
if (snprintf(g_bt_dump_buf + 5*(i+1), 6, "0x%02X ", ptr[i]) < 0) {
BTMTK_INFO("%s: snprintf error", __func__);
goto end;
}
}
len++;
g_bt_dump_buf[5*len] = '\n';
g_bt_dump_buf[5*len + 1] = '\0';
g_bt_dump_buf_len = 5*len + 1;
end:
return 0;
}
void bt_dbg_user_trx_proc(char *cmd_raw)
{
#define LEN_64 64
unsigned char hci_cmd[LEN_64];
int len = 0;
long tmp = 0;
char *ptr = NULL, *pRaw = NULL;
// Parse command raw data
memset(hci_cmd, 0, sizeof(hci_cmd));
pRaw = cmd_raw;
ptr = cmd_raw;
while(*ptr != '\0' && pRaw != NULL) {
if (len > LEN_64 - 1) {
BTMTK_INFO("%s: skip since cmd length exceed!", __func__);
return;
}
ptr = strsep(&pRaw, " ");
if (ptr != NULL) {
bt_osal_strtol(ptr, 16, &tmp);
hci_cmd[len++] = (unsigned char)tmp;
}
}
// Send command and wait for command_complete event
btmtk_btif_internal_trx(hci_cmd, len, bt_dbg_user_trx_cb, TRUE, TRUE);
}
ssize_t bt_dbg_write(struct file *filp, const char __user *buffer, size_t count, loff_t *f_pos)
{
bool is_passwd = FALSE, is_turn_on = FALSE;
size_t len = count;
char buf[256], *pBuf;
int x = 0, y = 0, z = 0;
long res = 0;
char* pToken = NULL;
char* pDelimiter = " \t";
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
bool bt_state = 0;
bt_state = (cif_dev->bt_state == FUNC_ON ? 1 : 0);
if (len == 0 || len >= sizeof(buf)) {
BTMTK_ERR("%s: invalid input length %zu!", __func__, len);
return -1;
}
memset(buf, 0, sizeof(buf));
if (copy_from_user(buf, buffer, len))
return -EFAULT;
buf[len] = '\0';
BTMTK_INFO("%s: bt_state[%d], dbg_enable[%d], len[%d]",
__func__, bt_state, g_bt_dbg_enable, (int)len);
/* Check whether the debug function is enabled:
 * - not enabled yet: the user must enable it first
 * - already enabled: the user may disable it
 */
if (len > strlen(BT_DBG_PASSWD) &&
0 == memcmp(buf, BT_DBG_PASSWD, strlen(BT_DBG_PASSWD))) {
is_passwd = TRUE;
if (0 == memcmp(buf + strlen(BT_DBG_PASSWD), "ON", strlen("ON")))
is_turn_on = TRUE;
}
if (!g_bt_dbg_enable) {
if (is_passwd && is_turn_on)
g_bt_dbg_enable = TRUE;
return len;
} else {
if (is_passwd && !is_turn_on) {
g_bt_dbg_enable = FALSE;
return len;
}
}
/* Mode 1: User trx flow: send command, get response */
if (0 == memcmp(buf, BT_DBG_USER_TRX_PREFIX, strlen(BT_DBG_USER_TRX_PREFIX))) {
if (!bt_state) // only works when BT is on
return len;
buf[len - 1] = '\0';
bt_dbg_user_trx_proc(buf + strlen(BT_DBG_USER_TRX_PREFIX));
return len;
}
/* Mode 2: Debug cmd flow, parse three parameters */
pBuf = buf;
pToken = strsep(&pBuf, pDelimiter);
if (pToken != NULL) {
bt_osal_strtol(pToken, 16, &res);
x = (int)res;
} else {
x = 0;
}
pToken = strsep(&pBuf, "\t\n ");
if (pToken != NULL) {
bt_osal_strtol(pToken, 16, &res);
y = (int)res;
BTMTK_INFO("%s: y = 0x%08x", __func__, y);
} else {
y = 3000;
/* efuse/register read/write default value */
if (0x5 == x || 0x6 == x)
y = 0x80000000;
}
pToken = strsep(&pBuf, "\t\n ");
if (pToken != NULL) {
bt_osal_strtol(pToken, 16, &res);
z = (int)res;
} else {
z = 10;
/* efuse/register read/write default value */
if (0x5 == x || 0x6 == x)
z = 0xffffffff;
}
BTMTK_INFO("%s: x(0x%08x), y(0x%08x), z(0x%08x)", __func__, x, y, z);
if (ARRAY_SIZE(bt_dev_dbg_struct) > x && NULL != bt_dev_dbg_struct[x].func) {
if(!bt_state && !bt_dev_dbg_struct[x].turn_off_availavle) {
BTMTK_WARN("%s: command id(0x%08x) only work when bt on!", __func__, x);
} else {
(*bt_dev_dbg_struct[x].func) (x, y, z);
}
} else {
BTMTK_WARN("%s: command id(0x%08x) no handler defined!", __func__, x);
}
return len;
}
int bt_dev_dbg_init(void)
{
int i_ret = 0;
#if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 6, 0)
static const struct proc_ops bt_dbg_fops = {
.proc_read = bt_dbg_read,
.proc_write = bt_dbg_write,
};
#else
static const struct file_operations bt_dbg_fops = {
.owner = THIS_MODULE,
.read = bt_dbg_read,
.write = bt_dbg_write,
};
#endif
// initialize debug function struct
g_bt_dbg_st.rt_thd_enable = FALSE;
g_bt_dbg_st.rx_buf_ctrl = TRUE;
g_bt_dbg_entry = proc_create(BT_DBG_PROCNAME, 0664, NULL, &bt_dbg_fops);
if (g_bt_dbg_entry == NULL) {
BTMTK_ERR("Unable to create [%s] bt proc entry", BT_DBG_PROCNAME);
i_ret = -1;
}
mutex_init(&g_bt_lock);
return i_ret;
}
int bt_dev_dbg_deinit(void)
{
mutex_destroy(&g_bt_lock);
if (g_bt_dbg_entry != NULL) {
proc_remove(g_bt_dbg_entry);
g_bt_dbg_entry = NULL;
}
return 0;
}
/*******************************************************************************
* bt host debug information for low power
********************************************************************************
*/
#define BTHOST_INFO_MAX 16
#define BTHOST_DESC_LEN 16
struct bthost_info{
uint32_t id; //0 for not used
char desc[BTHOST_DESC_LEN];
uint32_t value;
};
struct bthost_info bthost_info_table[BTHOST_INFO_MAX];
void bthost_debug_init(void)
{
uint32_t i = 0;
for (i = 0; i < BTHOST_INFO_MAX; i++){
bthost_info_table[i].id = 0;
bthost_info_table[i].desc[0] = '\0';
bthost_info_table[i].value = 0;
}
}
void bthost_debug_print(void)
{
uint32_t i = 0;
uint32_t ret = 0;
uint8_t *pos = NULL, *end = NULL;
uint8_t dump_buffer[700]={0};
pos = &dump_buffer[0];
end = pos + 700 - 1;
ret = snprintf(pos, (end - pos + 1), "[bt host info] ");
pos += ret;
for (i = 0; i < BTHOST_INFO_MAX; i++){
if (bthost_info_table[i].id == 0){
ret = snprintf(pos, (end - pos + 1),"[%d-%d] not set", i, BTHOST_INFO_MAX);
if (ret < 0 || ret >= (end - pos + 1)){
BTMTK_ERR("%s: snprintf fail i[%d] ret[%d]", __func__, i, ret);
break;
}
pos += ret;
break;
}
else {
ret = snprintf(pos, (end - pos + 1),"[%d][%s : 0x%08x] ", i,
bthost_info_table[i].desc,
bthost_info_table[i].value);
if (ret < 0 || ret >= (end - pos + 1)){
BTMTK_ERR("%s: snprintf fail i[%d] ret[%d]", __func__, i, ret);
break;
}
pos += ret;
}
}
BTMTK_INFO("%s", dump_buffer);
}
void bthost_debug_save(uint32_t id, uint32_t value, char* desc)
{
uint32_t i = 0;
if (id == 0) {
BTMTK_WARN("%s: id (%d) must be > 0\n", __func__, id);
return;
}
for (i = 0; i < BTHOST_INFO_MAX; i++){
// if the id is existed, save to the same column
if (bthost_info_table[i].id == id){
bthost_info_table[i].value = value;
return;
}
// save to the new column
if (bthost_info_table[i].id == 0){
bthost_info_table[i].id = id;
strncpy(bthost_info_table[i].desc, desc, BTHOST_DESC_LEN - 1);
bthost_info_table[i].value = value;
return;
}
}
BTMTK_WARN("%s: no space for %d\n", __func__, id);
}


@@ -0,0 +1,19 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#define CREATE_TRACE_POINTS
#include "btmtk_dbg_tp_evt.h"
#include "btmtk_dbg_tp_evt_if.h"
void bt_dbg_tp_evt(unsigned int pkt_action,
unsigned int parameter,
unsigned int data_len,
char *data)
{
//struct timespec64 kerneltime;
//ktime_get_ts64(&kerneltime);
trace_bt_evt(pkt_action, parameter, data_len, data);
}


@@ -0,0 +1,445 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#include <linux/interrupt.h>
#include <linux/irqreturn.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/of.h>
#include "btmtk_chip_if.h"
#include "conninfra.h"
#include "connsys_debug_utility.h"
/*******************************************************************************
* C O N S T A N T S
********************************************************************************
*/
/*******************************************************************************
* D A T A T Y P E S
********************************************************************************
*/
/*******************************************************************************
* P U B L I C D A T A
********************************************************************************
*/
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
unsigned long long irq_timer[12] = {0};
#endif
/*******************************************************************************
* P R I V A T E D A T A
********************************************************************************
*/
extern struct btmtk_dev *g_sbdev;
static struct bt_irq_ctrl bgf2ap_btif_wakeup_irq = {.name = "BTIF_WAKEUP_IRQ"};
static struct bt_irq_ctrl bgf2ap_sw_irq = {.name = "BGF_SW_IRQ"};
static struct bt_irq_ctrl bt_conn2ap_sw_irq = {.name = "BUS_SW_IRQ"};
static struct bt_irq_ctrl *bt_irq_table[BGF2AP_IRQ_MAX];
static struct work_struct rst_trigger_work;
/*******************************************************************************
* F U N C T I O N D E C L A R A T I O N S
********************************************************************************
*/
/*******************************************************************************
* F U N C T I O N S
********************************************************************************
*/
/* bt_reset_work
*
* A work thread that handles BT subsys reset request
*
* Arguments:
* [IN] work
*
* Return Value:
* N/A
*
*/
static void bt_reset_work(struct work_struct *work)
{
BTMTK_INFO("Trigger subsys reset");
bt_chip_reset_flow(RESET_LEVEL_0_5, CONNDRV_TYPE_BT, "BT Subsys reset");
}
/* bt_trigger_reset
*
* Trigger reset (could be subsys or whole chip reset)
*
* Arguments:
* N/A
*
* Return Value:
* N/A
*
*/
void bt_trigger_reset(void)
{
int32_t ret = conninfra_is_bus_hang();
BTMTK_INFO("%s: conninfra_is_bus_hang ret = %d", __func__, ret);
if (ret > 0)
conninfra_trigger_whole_chip_rst(CONNDRV_TYPE_BT, "bus hang");
else if (ret == CONNINFRA_ERR_RST_ONGOING)
BTMTK_INFO("whole chip reset is ongoing, skip subsys reset");
else
schedule_work(&rst_trigger_work);
}
/* bt_bgf2ap_irq_handler
*
* Handles BGF2AP_SW_IRQ, including FW log & chip reset.
* Note that this handler runs in the bt thread,
* not in interrupt context
*
* Arguments:
* N/A
*
* Return Value:
* N/A
*
*/
void bt_bgf2ap_irq_handler(void)
{
int32_t bgf_status = 0, count = 5;
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
cif_dev->bgf2ap_ind = FALSE;
/* wake up conn_infra off */
if(bgfsys_check_conninfra_ready())
return;
/* Read IRQ status CR to identify what happens */
bgf_status = bgfsys_get_sw_irq_status();
/* release conn_infra force on */
CLR_BIT(CONN_INFRA_WAKEUP_BT, BIT(0));
if (bgf_status == RET_SWIRQ_ST_FAIL)
return;
if (bgf_status && !(bgf_status & BGF_FW_LOG_NOTIFY)) {
BTMTK_INFO("bgf_status = 0x%08x", bgf_status);
} else {
BTMTK_DBG("bgf_status = 0x%08x", bgf_status);
}
if (bgf_status == 0xDEADFEED) {
bt_dump_bgfsys_all();
bt_enable_irq(BGF2AP_SW_IRQ);
} else if (bgf_status & BGF_SUBSYS_CHIP_RESET) {
if (cif_dev->rst_level != RESET_LEVEL_NONE)
complete(&cif_dev->rst_comp);
else
schedule_work(&rst_trigger_work);
} else if (bgf_status & BGF_FW_LOG_NOTIFY) {
/* FW notify host to get FW log */
connsys_log_irq_handler(CONN_DEBUG_TYPE_BT);
while(count--){}; /* brief busy-wait before re-enabling the IRQ */
bt_enable_irq(BGF2AP_SW_IRQ);
} else if (bgf_status & BGF_WHOLE_CHIP_RESET) {
conninfra_trigger_whole_chip_rst(CONNDRV_TYPE_BT, "FW trigger");
} else {
bt_enable_irq(BGF2AP_SW_IRQ);
}
}
/* bt_conn2ap_irq_handler
*
* Handles BT_CONN2AP_SW_IRQ, including BGF bus hang, and dumps the SSPM TIMER.
* Note that this handler runs in the bt thread,
* not in interrupt context
*
* Arguments:
* N/A
*
* Return Value:
* N/A
*
*/
void bt_conn2ap_irq_handler(void)
{
uint32_t value = 0;
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
cif_dev->bt_conn2ap_ind = FALSE;
value = bt_read_cr(BT_SSPM_TIMER);
BTMTK_INFO("%s: [SSPM] [0x%08x] = [0x%08x]", __func__, BT_SSPM_TIMER, value);
bt_trigger_reset();
}
/* btmtk_reset_init()
*
* Init work thread for subsys chip reset
*
* Arguments:
* N/A
*
* Return Value:
* N/A
*
*/
void btmtk_reset_init(void)
{
INIT_WORK(&rst_trigger_work, bt_reset_work);
}
/* btmtk_irq_handler()
*
* IRQ handler, processes the following IRQ types:
* BGF2AP_BTIF_WAKEUP_IRQ - this IRQ indicates that FW has data to transmit
* BGF2AP_SW_IRQ - this indicates that fw assert / fw log
*
* Arguments:
* [IN] irq - IRQ number
* [IN] arg - per-IRQ control block registered via request_irq()
*
* Return Value:
* returns IRQ_HANDLED for handled IRQ, IRQ_NONE otherwise
*
*/
static irqreturn_t btmtk_irq_handler(int irq, void * arg)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[0] = sched_clock();
#endif
if (irq == bgf2ap_btif_wakeup_irq.irq_num) {
if (cif_dev->rst_level == RESET_LEVEL_NONE) {
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[1] = sched_clock();
#endif
bt_disable_irq(BGF2AP_BTIF_WAKEUP_IRQ);
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[7] = sched_clock();
#endif
cif_dev->rx_ind = TRUE;
cif_dev->psm.sleep_flag = FALSE;
wake_up_interruptible(&cif_dev->tx_waitq);
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[10] = sched_clock();
if (irq_timer[10] - irq_timer[1] > 5000000){
BTMTK_ERR("btif: start1[%llu] b_dis2[%llu] in_dis3[%llu] b_lock4[%llu] a_lock5[%llu] b_unlock6[%llu] a_unlock7[%llu] a_dis8[%llu] end11[%llu]", irq_timer[0], irq_timer[1], irq_timer[2], irq_timer[3], irq_timer[4], irq_timer[5], irq_timer[6], irq_timer[7], irq_timer[10]);
}
#endif
}
return IRQ_HANDLED;
} else if (irq == bgf2ap_sw_irq.irq_num) {
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[8] = sched_clock();
#endif
bt_disable_irq(BGF2AP_SW_IRQ);
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[9] = sched_clock();
#endif
cif_dev->bgf2ap_ind = TRUE;
wake_up_interruptible(&cif_dev->tx_waitq);
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[11] = sched_clock();
if (irq_timer[11] - irq_timer[8] > 5000000){
BTMTK_ERR("sw: start1[%llu] b_dis9[%llu] in_dis3[%llu] b_lock4[%llu] a_lock5[%llu] b_unlock6[%llu] a_unlock7[%llu] a_dis10[%llu] end11[%llu]", irq_timer[0], irq_timer[8], irq_timer[2], irq_timer[3], irq_timer[4], irq_timer[5], irq_timer[6], irq_timer[9], irq_timer[11]);
}
#endif
return IRQ_HANDLED;
} else if (irq == bt_conn2ap_sw_irq.irq_num) {
bt_disable_irq(BT_CONN2AP_SW_IRQ);
cif_dev->bt_conn2ap_ind = TRUE;
wake_up_interruptible(&cif_dev->tx_waitq);
return IRQ_HANDLED;
}
return IRQ_NONE;
}
/* bt_request_irq()
*
* Request IRQ
*
* Arguments:
* [IN] irq_type - IRQ type
*
* Return Value:
* returns 0 on success, a negative error code otherwise
*
*/
int32_t bt_request_irq(enum bt_irq_type irq_type)
{
uint32_t irq_num = 0;
int32_t ret = 0;
unsigned long irq_flags = 0;
struct bt_irq_ctrl *pirq = NULL;
struct device_node *node = NULL;
switch (irq_type) {
case BGF2AP_BTIF_WAKEUP_IRQ:
node = of_find_compatible_node(NULL, NULL, "mediatek,bt");
if (node) {
irq_num = irq_of_parse_and_map(node, 0);
BTMTK_DBG("irqNum of BGF2AP_BTIF_WAKEUP_IRQ = %d", irq_num);
}
else
BTMTK_ERR("BT-OF: get bt device node fail");
irq_flags = IRQF_TRIGGER_HIGH | IRQF_SHARED;
pirq = &bgf2ap_btif_wakeup_irq;
break;
case BGF2AP_SW_IRQ:
node = of_find_compatible_node(NULL, NULL, "mediatek,bt");
if (node) {
irq_num = irq_of_parse_and_map(node, 1);
BTMTK_DBG("irqNum of BGF2AP_SW_IRQ = %d", irq_num);
}
else
BTMTK_ERR("BT-OF: get bt device node fail");
irq_flags = IRQF_TRIGGER_HIGH | IRQF_SHARED;
pirq = &bgf2ap_sw_irq;
break;
case BT_CONN2AP_SW_IRQ:
node = of_find_compatible_node(NULL, NULL, "mediatek,bt");
if (node) {
irq_num = irq_of_parse_and_map(node, 2);
BTMTK_DBG("irqNum of BT_CONN2AP_SW_IRQ = %d", irq_num);
}
else
BTMTK_ERR("BT-OF: get bt device node fail");
irq_flags = IRQF_TRIGGER_HIGH | IRQF_SHARED;
pirq = &bt_conn2ap_sw_irq;
break;
default:
BTMTK_ERR("Invalid irq_type %d!", irq_type);
return -EINVAL;
}
pirq->irq_num = irq_num;
spin_lock_init(&pirq->lock);
pirq->active = TRUE;
ret = request_irq(irq_num, btmtk_irq_handler, irq_flags,
pirq->name, pirq);
if (ret) {
BTMTK_ERR("Request %s (%u) failed! ret(%d)", pirq->name, irq_num, ret);
pirq->active = FALSE;
return ret;
}
ret = enable_irq_wake(irq_num);
if (ret) {
BTMTK_ERR("enable_irq_wake %s (%u) failed! ret(%d)", pirq->name, irq_num, ret);
}
BTMTK_INFO("Request %s (%u) succeed, pirq = %p, flag = 0x%08x", pirq->name, irq_num, pirq, irq_flags);
bt_irq_table[irq_type] = pirq;
return 0;
}
/* bt_enable_irq()
*
* Enable IRQ
*
* Arguments:
* [IN] irq_type - IRQ type
*
* Return Value:
* N/A
*
*/
void bt_enable_irq(enum bt_irq_type irq_type)
{
struct bt_irq_ctrl *pirq;
if (irq_type >= BGF2AP_IRQ_MAX) {
BTMTK_ERR("Invalid irq_type %d!", irq_type);
return;
}
pirq = bt_irq_table[irq_type];
if (pirq) {
spin_lock_irqsave(&pirq->lock, pirq->flags);
if (!pirq->active) {
enable_irq(pirq->irq_num);
pirq->active = TRUE;
}
spin_unlock_irqrestore(&pirq->lock, pirq->flags);
}
}
/* bt_disable_irq()
*
* Disable IRQ
*
* Arguments:
* [IN] irq_type - IRQ type
*
* Return Value:
* N/A
*
*/
void bt_disable_irq(enum bt_irq_type irq_type)
{
struct bt_irq_ctrl *pirq;
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[2] = sched_clock();
#endif
if (irq_type >= BGF2AP_IRQ_MAX) {
BTMTK_ERR("Invalid irq_type %d!", irq_type);
return;
}
pirq = bt_irq_table[irq_type];
if (pirq) {
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[3] = sched_clock();
#endif
spin_lock_irqsave(&pirq->lock, pirq->flags);
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[4] = sched_clock();
#endif
if (pirq->active) {
disable_irq_nosync(pirq->irq_num);
pirq->active = FALSE;
}
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[5] = sched_clock();
#endif
spin_unlock_irqrestore(&pirq->lock, pirq->flags);
#if IS_ENABLED(CONFIG_MTK_IRQ_MONITOR_DEBUG)
irq_timer[6] = sched_clock();
#endif
}
}
/* bt_free_irq()
*
* Release IRQ and de-register IRQ
*
* Arguments:
* [IN] irq_type - IRQ type
*
* Return Value:
* N/A
*
*/
void bt_free_irq(enum bt_irq_type irq_type)
{
struct bt_irq_ctrl *pirq;
if (irq_type >= BGF2AP_IRQ_MAX) {
BTMTK_ERR("Invalid irq_type %d!", irq_type);
return;
}
pirq = bt_irq_table[irq_type];
if (pirq) {
disable_irq_wake(pirq->irq_num);
free_irq(pirq->irq_num, pirq);
pirq->active = FALSE;
bt_irq_table[irq_type] = NULL;
}
}

File diff suppressed because it is too large


@@ -0,0 +1,585 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#include <linux/rtc.h>
#include "btmtk_chip_if.h"
#include "btmtk_main.h"
/*******************************************************************************
* D A T A T Y P E S
********************************************************************************
*/
/*******************************************************************************
* P U B L I C D A T A
********************************************************************************
*/
struct workqueue_struct *workqueue_task;
struct delayed_work work;
/*******************************************************************************
* P R I V A T E D A T A
********************************************************************************
*/
extern struct btmtk_dev *g_sbdev;
extern struct bt_dbg_st g_bt_dbg_st;
/*******************************************************************************
* F U N C T I O N D E C L A R A T I O N S
********************************************************************************
*/
#if (USE_DEVICE_NODE == 1)
uint8_t is_rx_queue_empty(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_ring_buffer_mgmt *p_ring = &cif_dev->rx_buffer;
spin_lock(&p_ring->lock);
if (p_ring->read_idx == p_ring->write_idx) {
spin_unlock(&p_ring->lock);
return TRUE;
} else {
spin_unlock(&p_ring->lock);
return FALSE;
}
}
static uint8_t is_rx_queue_res_available(uint32_t length)
{
uint32_t room_left;
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_ring_buffer_mgmt *p_ring = &cif_dev->rx_buffer;
/*
* Get available space of RX Queue
*/
spin_lock(&p_ring->lock);
if (p_ring->read_idx <= p_ring->write_idx)
room_left = RING_BUFFER_SIZE - p_ring->write_idx + p_ring->read_idx - 1;
else
room_left = p_ring->read_idx - p_ring->write_idx - 1;
spin_unlock(&p_ring->lock);
if (room_left < length) {
BTMTK_WARN("RX queue room left (%u) < required (%u)", room_left, length);
return FALSE;
}
return TRUE;
}
static int32_t rx_pkt_enqueue(uint8_t *buffer, uint32_t length)
{
uint32_t tail_len;
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_ring_buffer_mgmt *p_ring = &cif_dev->rx_buffer;
if (length > HCI_MAX_FRAME_SIZE) {
BTMTK_ERR("Abnormal packet length %u, not enqueue!", length);
return -EINVAL;
}
spin_lock(&p_ring->lock);
if (p_ring->write_idx + length < RING_BUFFER_SIZE) {
memcpy(p_ring->buf + p_ring->write_idx, buffer, length);
p_ring->write_idx += length;
} else {
tail_len = RING_BUFFER_SIZE - p_ring->write_idx;
memcpy(p_ring->buf + p_ring->write_idx, buffer, tail_len);
memcpy(p_ring->buf, buffer + tail_len, length - tail_len);
p_ring->write_idx = length - tail_len;
}
spin_unlock(&p_ring->lock);
return 0;
}
int32_t rx_skb_enqueue(struct sk_buff *skb)
{
#define WAIT_TIMES 40
int8_t i = 0;
int32_t ret = 0;
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
if (!skb || skb->len == 0) {
BTMTK_WARN("Invalid data event, skip, skb = NULL or skb len = 0");
ret = -1;
goto end;
}
/* FW will block the data if its buffer is full;
the driver can wait an interval for the native process to read it out */
if(g_bt_dbg_st.rx_buf_ctrl == TRUE) {
for(i = 0; i < WAIT_TIMES; i++) {
if (!is_rx_queue_res_available(skb->len + 1)) {
usleep_range(USLEEP_5MS_L, USLEEP_5MS_H);
} else
break;
}
}
if (!is_rx_queue_res_available(skb->len + 1)) {
BTMTK_WARN("rx packet drop!!!");
ret = -1;
goto end;
}
memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1);
ret = rx_pkt_enqueue(skb->data, skb->len);
if (!is_rx_queue_empty() && cif_dev->rx_event_cb)
cif_dev->rx_event_cb();
end:
if (skb)
kfree_skb(skb);
return ret;
}
void rx_dequeue(uint8_t *buffer, uint32_t size, uint32_t *plen)
{
uint32_t copy_len = 0, tail_len;
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_ring_buffer_mgmt *p_ring = &cif_dev->rx_buffer;
spin_lock(&p_ring->lock);
if (p_ring->read_idx != p_ring->write_idx) {
/*
* RX Queue not empty,
* fill out the retrieving buffer until it is full, or we have no data.
*/
if (p_ring->read_idx < p_ring->write_idx) {
copy_len = p_ring->write_idx - p_ring->read_idx;
if (copy_len > size)
copy_len = size;
memcpy(buffer, p_ring->buf + p_ring->read_idx, copy_len);
p_ring->read_idx += copy_len;
} else { /* read_idx > write_idx */
tail_len = RING_BUFFER_SIZE - p_ring->read_idx;
if (tail_len > size) { /* exclude equal case to skip wrap check */
copy_len = size;
memcpy(buffer, p_ring->buf + p_ring->read_idx, copy_len);
p_ring->read_idx += copy_len;
} else {
/* 1. copy tail */
memcpy(buffer, p_ring->buf + p_ring->read_idx, tail_len);
/* 2. check if head length is enough */
copy_len = (p_ring->write_idx < (size - tail_len))
? p_ring->write_idx : (size - tail_len);
/* 3. copy header */
memcpy(buffer + tail_len, p_ring->buf, copy_len);
p_ring->read_idx = copy_len;
/* 4. update copy length: head + tail */
copy_len += tail_len;
}
}
}
spin_unlock(&p_ring->lock);
*plen = copy_len;
return;
}
void rx_queue_flush(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_ring_buffer_mgmt *p_ring = &cif_dev->rx_buffer;
p_ring->read_idx = p_ring->write_idx = 0;
}
void rx_queue_initialize(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_ring_buffer_mgmt *p_ring = &cif_dev->rx_buffer;
p_ring->read_idx = p_ring->write_idx = 0;
spin_lock_init(&p_ring->lock);
}
void rx_queue_destroy(void)
{
rx_queue_flush();
}
/* Interface for device node mechanism */
void btmtk_rx_flush(void)
{
rx_queue_flush();
}
uint8_t btmtk_rx_data_valid(void)
{
return !is_rx_queue_empty();
}
void btmtk_register_rx_event_cb(struct hci_dev *hdev, BT_RX_EVENT_CB cb)
{
struct btmtk_dev *bdev = hci_get_drvdata(hdev);
struct btmtk_btif_dev *cif_dev = bdev->cif_dev;
cif_dev->rx_event_cb = cb;
btmtk_rx_flush();
}
int32_t btmtk_receive_data(struct hci_dev *hdev, uint8_t *buf, uint32_t count)
{
uint32_t read_bytes;
rx_dequeue(buf, count, &read_bytes);
/* TODO: disable quick PS mode by traffic density */
return read_bytes;
}
#endif // (USE_DEVICE_NODE == 1)
#if (DRIVER_CMD_CHECK == 1)
void cmd_list_initialize(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_cmd_queue *p_queue = NULL;
BTMTK_DBG("%s", __func__);
p_queue = &cif_dev->cmd_queue;
p_queue->head = NULL;
p_queue->tail = NULL;
p_queue->size = 0;
spin_lock_init(&p_queue->lock);
}
struct bt_cmd_node* cmd_free_node(struct bt_cmd_node* node)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_cmd_queue *p_queue = NULL;
struct bt_cmd_node* next = node->next;
p_queue = &cif_dev->cmd_queue;
kfree(node);
p_queue->size--;
return next;
}
bool cmd_list_isempty(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_cmd_queue *p_queue = NULL;
p_queue = &cif_dev->cmd_queue;
spin_lock(&p_queue->lock);
if(p_queue->size == 0) {
spin_unlock(&p_queue->lock);
return TRUE;
} else {
spin_unlock(&p_queue->lock);
return FALSE;
}
}
bool cmd_list_append (uint16_t opcode)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_cmd_queue *p_queue = NULL;
struct bt_cmd_node *node = kzalloc(sizeof(struct bt_cmd_node),GFP_KERNEL);
p_queue = &cif_dev->cmd_queue;
if (!node) {
BTMTK_ERR("%s create node fail",__func__);
return FALSE;
}
spin_lock(&p_queue->lock);
node->next = NULL;
node->opcode = opcode;
if(p_queue->tail == NULL){
p_queue->head = node;
p_queue->tail = node;
} else {
p_queue->tail->next = node;
p_queue->tail = node;
}
p_queue->size ++;
spin_unlock(&p_queue->lock);
return TRUE;
}
bool cmd_list_check(uint16_t opcode)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_cmd_queue *p_queue = NULL;
struct bt_cmd_node* curr = NULL;
p_queue = &cif_dev->cmd_queue;
if (cmd_list_isempty() == TRUE) return FALSE;
spin_lock(&p_queue->lock);
curr = p_queue->head;
while(curr){
if(curr->opcode == opcode){
spin_unlock(&p_queue->lock);
return TRUE;
}
curr=curr->next;
}
spin_unlock(&p_queue->lock);
return FALSE;
}
bool cmd_list_remove(uint16_t opcode)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_cmd_queue *p_queue = NULL;
struct bt_cmd_node* prev = NULL;
struct bt_cmd_node* curr = NULL;
p_queue = &cif_dev->cmd_queue;
if (cmd_list_isempty() == TRUE) return FALSE;
spin_lock(&p_queue->lock);
if(p_queue->head->opcode == opcode) {
struct bt_cmd_node* next = NULL;
/* check tail before freeing head to avoid comparing a dangling pointer */
if (p_queue->head == p_queue->tail) p_queue->tail = NULL;
next = cmd_free_node(p_queue->head);
p_queue->head = next;
spin_unlock(&p_queue->lock);
return TRUE;
}
prev = p_queue->head;
curr = p_queue->head->next;
while(curr){
if(curr->opcode == opcode) {
prev->next = cmd_free_node(curr);
if(p_queue->tail == curr) p_queue->tail = prev;
spin_unlock(&p_queue->lock);
return TRUE;
}
prev = curr;
curr = curr->next;
}
BTMTK_ERR("%s No match opcode: %4X", __func__,opcode);
spin_unlock(&p_queue->lock);
return FALSE;
}
void cmd_list_destory(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_cmd_queue *p_queue = NULL;
struct bt_cmd_node* curr= NULL;
BTMTK_DBG("%s",__func__);
p_queue = &cif_dev->cmd_queue;
spin_lock(&p_queue->lock);
curr = p_queue->head;
while(curr){
curr = cmd_free_node(curr);
}
p_queue->head = NULL;
p_queue->tail = NULL;
p_queue->size = 0;
spin_unlock(&p_queue->lock);
}
void command_response_timeout(struct work_struct *pwork)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_cmd_queue *p_queue = NULL;
p_queue = &cif_dev->cmd_queue;
if (p_queue->size != 0) {
cif_dev->cmd_timeout_count++;
BTMTK_INFO("[%s] timeout [%d] sleep [%d] force_on [%d]", __func__,
cif_dev->cmd_timeout_count,
cif_dev->psm.sleep_flag,
cif_dev->psm.force_on);
btmtk_cif_dump_rxd_backtrace();
btmtk_cif_dump_fw_no_rsp(BT_BTIF_DUMP_REG);
if (cif_dev->cmd_timeout_count == 4) {
spin_lock(&p_queue->lock);
if (p_queue->head)
BTMTK_ERR("%s, !!!! Command Timeout !!!! opcode 0x%4X", __func__, p_queue->head->opcode);
else
BTMTK_ERR("%s, p_queue head is NULL", __func__);
spin_unlock(&p_queue->lock);
// To-do : Need to consider if it has any condition to check
cif_dev->cmd_timeout_count = 0;
bt_trigger_reset();
} else {
down(&cif_dev->cmd_tout_sem);
if(workqueue_task != NULL) {
queue_delayed_work(workqueue_task, &work, HZ>>1);
}
up(&cif_dev->cmd_tout_sem);
}
}
}
bool cmd_workqueue_init(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
BTMTK_INFO("init workqueue");
workqueue_task = create_singlethread_workqueue("workqueue_task");
if(!workqueue_task){
BTMTK_ERR("fail to init workqueue");
return FALSE;
}
INIT_DELAYED_WORK(&work, command_response_timeout);
cif_dev->cmd_timeout_count = 0;
return TRUE;
}
void update_command_response_workqueue(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_cmd_queue *p_queue = NULL;
p_queue = &cif_dev->cmd_queue;
if (p_queue->size == 0){
BTMTK_DBG("command queue size = 0");
cancel_delayed_work(&work);
} else {
spin_lock(&p_queue->lock);
if (p_queue->head)
BTMTK_DBG("update new command queue : %4X" , p_queue->head->opcode);
else
BTMTK_ERR("%s, p_queue head is NULL", __func__);
spin_unlock(&p_queue->lock);
cif_dev->cmd_timeout_count = 0;
cancel_delayed_work(&work);
down(&cif_dev->cmd_tout_sem);
if(workqueue_task != NULL) {
queue_delayed_work(workqueue_task, &work, HZ>>1);
}
up(&cif_dev->cmd_tout_sem);
}
}
void cmd_workqueue_exit(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
int ret_a = 0, ret_b = 0;
if(workqueue_task != NULL) {
ret_b = cancel_delayed_work(&work);
flush_workqueue(workqueue_task);
ret_a = cancel_delayed_work(&work);
BTMTK_INFO("cancel workqueue before[%d] after[%d] flush", ret_b, ret_a);
down(&cif_dev->cmd_tout_sem);
destroy_workqueue(workqueue_task);
workqueue_task = NULL;
up(&cif_dev->cmd_tout_sem);
}
}
#endif // (DRIVER_CMD_CHECK == 1)
const char* direction_tostring(enum bt_direction_type direction_type) {
static const char * const type[] = {"NONE", "TX", "RX"};
if ((size_t)direction_type >= ARRAY_SIZE(type))
return "UNKNOWN"; /* guard against out-of-range enum values */
return type[direction_type];
}
void dump_queue_initialize(void)
{
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_dump_queue *d_queue = NULL;
BTMTK_INFO("init dumpqueue");
d_queue = &cif_dev->dump_queue;
d_queue->index = 0;
d_queue->full = 0;
spin_lock_init(&d_queue->lock);
memset(d_queue->queue, 0, MAX_DUMP_QUEUE_SIZE * sizeof(struct bt_dump_packet));
}
void add_dump_packet(const uint8_t *buffer, const uint32_t length, enum bt_direction_type type) {
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_dump_queue *d_queue = NULL;
uint32_t index = 0;
struct bt_dump_packet *p_packet = NULL;
uint32_t copysize;
d_queue = &cif_dev->dump_queue;
spin_lock(&d_queue->lock);
/* read the write index only after taking the lock */
index = d_queue->index;
p_packet = &d_queue->queue[index];
if (length > MAX_DUMP_DATA_SIZE)
copysize = MAX_DUMP_DATA_SIZE;
else
copysize = length;
ktime_get_real_ts64(&p_packet->time);
ktime_get_ts64(&p_packet->kerneltime);
memcpy(p_packet->data,buffer,copysize);
p_packet->data_length = length;
p_packet->direction_type = type;
d_queue->index = (d_queue->index+1) % MAX_DUMP_QUEUE_SIZE;
BTMTK_DBG("index: %d", d_queue->index);
if (d_queue->full == FALSE && d_queue->index == 0)
d_queue->full = TRUE;
spin_unlock(&d_queue->lock);
}
void print_dump_packet(struct bt_dump_packet *p_packet){
int32_t copysize;
uint32_t sec, usec, ksec, knsec;
struct rtc_time tm;
sec = p_packet->time.tv_sec;
usec = p_packet->time.tv_nsec/1000;
ksec = p_packet->kerneltime.tv_sec;
knsec = p_packet->kerneltime.tv_nsec;
rtc_time64_to_tm(sec, &tm);
if (p_packet->data_length > MAX_DUMP_DATA_SIZE)
copysize = MAX_DUMP_DATA_SIZE;
else
copysize = p_packet->data_length;
BTMTK_INFO_RAW(p_packet->data, copysize, "Dump: Time:%02d:%02d:%02d.%06u, Kernel Time:%6d.%09u, %s, Size = %3d, Data: "
, tm.tm_hour+8, tm.tm_min, tm.tm_sec, usec, ksec, knsec
, direction_tostring(p_packet->direction_type), p_packet->data_length);
}
void show_all_dump_packet(void) {
struct btmtk_btif_dev *cif_dev = (struct btmtk_btif_dev *)g_sbdev->cif_dev;
struct bt_dump_queue *d_queue = NULL;
int32_t i, j, showsize;
struct bt_dump_packet *p_packet;
d_queue = &cif_dev->dump_queue;
spin_lock(&d_queue->lock);
if (d_queue->full == TRUE) {
showsize = MAX_DUMP_QUEUE_SIZE;
for(i = 0,j = d_queue->index; i < showsize; i++,j++) {
p_packet = &d_queue->queue[j % MAX_DUMP_QUEUE_SIZE];
print_dump_packet(p_packet);
}
} else {
showsize = d_queue->index;
for(i = 0; i < showsize; i++) {
p_packet = &d_queue->queue[i];
print_dump_packet(p_packet);
}
}
spin_unlock(&d_queue->lock);
}


@@ -0,0 +1,7 @@
# load bt_drv
on property:vendor.connsys.driver.ready=yes
insmod /vendor/lib/modules/bt_drv_${ro.vendor.bt.platform}.ko
chown bluetooth bluetooth /proc/driver/bt_dbg
on property:vendor.connsys.driver.ready=no
insmod /vendor/lib/modules/bt_drv_${ro.vendor.bt.platform}.ko
chown bluetooth bluetooth /proc/driver/bt_dbg


@@ -0,0 +1,261 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#include "btmtk_define.h"
#include "btmtk_main.h"
#include "btmtk_buffer_mode.h"
static struct btmtk_buffer_mode_struct btmtk_buffer_mode;
static int btmtk_buffer_mode_check_auto_mode(struct btmtk_buffer_mode_struct *buffer_mode)
{
u16 addr = 1;
u8 value = 0;
if (buffer_mode->efuse_mode != AUTO_MODE)
return 0;
if (btmtk_efuse_read(buffer_mode->bdev, addr, &value)) {
BTMTK_WARN("read fail");
BTMTK_WARN("Use EEPROM Bin file mode");
buffer_mode->efuse_mode = BIN_FILE_MODE;
return -EIO;
}
if (value == ((buffer_mode->bdev->chip_id & 0xFF00) >> 8)) {
BTMTK_WARN("get efuse[1]: 0x%02x", value);
BTMTK_WARN("use efuse mode");
buffer_mode->efuse_mode = EFUSE_MODE;
} else {
BTMTK_WARN("get efuse[1]: 0x%02x", value);
BTMTK_WARN("Use EEPROM Bin file mode");
buffer_mode->efuse_mode = BIN_FILE_MODE;
}
return 0;
}
static int btmtk_buffer_mode_parse_mode(uint8_t *buf, size_t buf_size)
{
int efuse_mode = EFUSE_MODE;
char *p_buf = NULL;
char *ptr = NULL, *p = NULL;
if (!buf) {
BTMTK_WARN("buf is null");
return efuse_mode;
} else if (buf_size < (strlen(BUFFER_MODE_SWITCH_FIELD) + 2)) {
BTMTK_WARN("incorrect buf size(%d)", (int)buf_size);
return efuse_mode;
}
p_buf = kmalloc(buf_size + 1, GFP_KERNEL);
if (!p_buf)
return efuse_mode;
memcpy(p_buf, buf, buf_size);
p_buf[buf_size] = '\0';
/* find string */
p = ptr = strstr(p_buf, BUFFER_MODE_SWITCH_FIELD);
if (!ptr) {
BTMTK_ERR("Can't find %s", BUFFER_MODE_SWITCH_FIELD);
goto out;
}
if (p > p_buf) {
p--;
while ((*p == ' ') && (p != p_buf))
p--;
if (*p == '#') {
BTMTK_ERR("It's not EEPROM - Bin file mode");
goto out;
}
}
/* check access mode */
ptr += (strlen(BUFFER_MODE_SWITCH_FIELD) + 1);
BTMTK_WARN("It's EEPROM bin mode: %c", *ptr);
efuse_mode = *ptr - '0';
if (efuse_mode > AUTO_MODE)
efuse_mode = EFUSE_MODE;
out:
kfree(p_buf);
return efuse_mode;
}
static int btmtk_buffer_mode_set_addr(struct btmtk_buffer_mode_struct *buffer_mode)
{
u8 cmd[SET_ADDRESS_CMD_LEN] = {0x01, 0x1A, 0xFC, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
u8 event[SET_ADDRESS_EVT_LEN] = {0x04, 0x0E, 0x04, 0x01, 0x1A, 0xFC, 0x00};
int ret = 0;
if (buffer_mode->bt0_mac[0] == 0x00 && buffer_mode->bt0_mac[1] == 0x00
&& buffer_mode->bt0_mac[2] == 0x00 && buffer_mode->bt0_mac[3] == 0x00
&& buffer_mode->bt0_mac[4] == 0x00 && buffer_mode->bt0_mac[5] == 0x00) {
BTMTK_WARN("BDAddr is Zero, not set");
} else {
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 5] = buffer_mode->bt0_mac[0];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 4] = buffer_mode->bt0_mac[1];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 3] = buffer_mode->bt0_mac[2];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 2] = buffer_mode->bt0_mac[3];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET + 1] = buffer_mode->bt0_mac[4];
cmd[SET_ADDRESS_CMD_PAYLOAD_OFFSET] = buffer_mode->bt0_mac[5];
BTMTK_INFO_RAW(cmd, SET_ADDRESS_CMD_LEN, "%s: Send", __func__);
ret = btmtk_main_send_cmd(buffer_mode->bdev,
cmd, SET_ADDRESS_CMD_LEN,
event, SET_ADDRESS_EVT_LEN,
0, 0, BTMTK_TX_CMD_FROM_DRV);
}
BTMTK_INFO("%s done", __func__);
return ret;
}
static int btmtk_buffer_mode_set_radio(struct btmtk_buffer_mode_struct *buffer_mode)
{
u8 cmd[SET_RADIO_CMD_LEN] = {0x01, 0x2C, 0xFC, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
u8 event[SET_RADIO_EVT_LEN] = {0x04, 0x0E, 0x04, 0x01, 0x2C, 0xFC, 0x00};
int ret = 0;
cmd[SET_RADIO_CMD_EDR_DEF_OFFSET] = buffer_mode->bt0_radio.radio_0 & 0x3F; /* edr_init_pwr */
cmd[SET_RADIO_CMD_BLE_OFFSET] = buffer_mode->bt0_radio.radio_2 & 0x3F; /* ble_default_pwr */
cmd[SET_RADIO_CMD_EDR_MAX_OFFSET] = buffer_mode->bt0_radio.radio_1 & 0x3F; /* edr_max_pwr */
cmd[SET_RADIO_CMD_EDR_MODE_OFFSET] = (buffer_mode->bt0_radio.radio_0 & 0xC0) >> 6; /* edr_pwr_mode */
BTMTK_INFO_RAW(cmd, SET_RADIO_CMD_LEN, "%s: Send", __func__);
ret = btmtk_main_send_cmd(buffer_mode->bdev,
cmd, SET_RADIO_CMD_LEN,
event, SET_RADIO_EVT_LEN,
0, 0, BTMTK_TX_CMD_FROM_DRV);
BTMTK_INFO("%s done", __func__);
return ret;
}
static int btmtk_buffer_mode_set_group_boundary(struct btmtk_buffer_mode_struct *buffer_mode)
{
u8 cmd[SET_GRP_CMD_LEN] = {0x01, 0xEA, 0xFC, 0x09, 0x02, 0x0B, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
u8 event[SET_GRP_EVT_LEN] = {0x04, 0x0E, 0x04, 0x01, 0xEA, 0xFC, 0x00};
int ret = 0;
memcpy(&cmd[SET_GRP_CMD_PAYLOAD_OFFSET], buffer_mode->bt0_ant0_grp_boundary, BUFFER_MODE_GROUP_LENGTH);
BTMTK_INFO_RAW(cmd, SET_GRP_CMD_LEN, "%s: Send", __func__);
ret = btmtk_main_send_cmd(buffer_mode->bdev,
cmd, SET_GRP_CMD_LEN,
event, SET_GRP_EVT_LEN,
0, 0, BTMTK_TX_CMD_FROM_DRV);
BTMTK_INFO("%s done", __func__);
return ret;
}
static int btmtk_buffer_mode_set_power_offset(struct btmtk_buffer_mode_struct *buffer_mode)
{
u8 cmd[SET_PWR_OFFSET_CMD_LEN] = {0x01, 0xEA, 0xFC, 0x0A,
0x02, 0x0A, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
u8 event[SET_PWR_OFFSET_EVT_LEN] = {0x04, 0x0E, 0x04, 0x01, 0xEA, 0xFC, 0x00};
int ret = 0;
memcpy(&cmd[SET_PWR_OFFSET_CMD_PAYLOAD_OFFSET], buffer_mode->bt0_ant0_pwr_offset, BUFFER_MODE_CAL_LENGTH);
BTMTK_INFO_RAW(cmd, SET_PWR_OFFSET_CMD_LEN, "%s: Send", __func__);
ret = btmtk_main_send_cmd(buffer_mode->bdev,
cmd, SET_PWR_OFFSET_CMD_LEN,
event, SET_PWR_OFFSET_EVT_LEN,
0, 0, BTMTK_TX_CMD_FROM_DRV);
BTMTK_INFO("%s done", __func__);
return ret;
}
int btmtk_buffer_mode_send(struct btmtk_buffer_mode_struct *buffer_mode)
{
int ret = 0;
if (buffer_mode == NULL) {
BTMTK_INFO("buffer_mode is NULL, not support");
return -EIO;
}
if (btmtk_buffer_mode_check_auto_mode(buffer_mode)) {
BTMTK_ERR("check auto mode failed");
return -EIO;
}
if (buffer_mode->efuse_mode == BIN_FILE_MODE) {
ret = btmtk_buffer_mode_set_addr(buffer_mode);
if (ret < 0)
BTMTK_ERR("set addr failed");
ret = btmtk_buffer_mode_set_radio(buffer_mode);
if (ret < 0)
BTMTK_ERR("set radio failed");
ret = btmtk_buffer_mode_set_group_boundary(buffer_mode);
if (ret < 0)
BTMTK_ERR("set group_boundary failed");
ret = btmtk_buffer_mode_set_power_offset(buffer_mode);
if (ret < 0)
BTMTK_ERR("set power_offset failed");
}
return 0;
}
void btmtk_buffer_mode_initialize(struct btmtk_dev *bdev, struct btmtk_buffer_mode_struct **buffer_mode)
{
int ret = 0;
u32 code_len = 0;
btmtk_buffer_mode.bdev = bdev;
ret = btmtk_load_code_from_setting_files(BUFFER_MODE_SWITCH_FILE, bdev->intf_dev, &code_len, bdev);
btmtk_buffer_mode.efuse_mode = btmtk_buffer_mode_parse_mode(bdev->setting_file, code_len);
if (btmtk_buffer_mode.efuse_mode == EFUSE_MODE)
return;
if (bdev->flavor)
(void)snprintf(btmtk_buffer_mode.file_name, MAX_BIN_FILE_NAME_LEN, "EEPROM_MT%04x_1a.bin",
bdev->chip_id & 0xffff);
else
(void)snprintf(btmtk_buffer_mode.file_name, MAX_BIN_FILE_NAME_LEN, "EEPROM_MT%04x_1.bin",
bdev->chip_id & 0xffff);
ret = btmtk_load_code_from_setting_files(btmtk_buffer_mode.file_name, bdev->intf_dev, &code_len, bdev);
if (ret < 0) {
BTMTK_ERR("set load %s failed", btmtk_buffer_mode.file_name);
return;
}
memcpy(btmtk_buffer_mode.bt0_mac, &bdev->setting_file[BT0_MAC_OFFSET],
BUFFER_MODE_MAC_LENGTH);
memcpy(btmtk_buffer_mode.bt1_mac, &bdev->setting_file[BT1_MAC_OFFSET],
BUFFER_MODE_MAC_LENGTH);
memcpy(&btmtk_buffer_mode.bt0_radio, &bdev->setting_file[BT0_RADIO_OFFSET],
BUFFER_MODE_RADIO_LENGTH);
memcpy(&btmtk_buffer_mode.bt1_radio, &bdev->setting_file[BT1_RADIO_OFFSET],
BUFFER_MODE_RADIO_LENGTH);
memcpy(btmtk_buffer_mode.bt0_ant0_grp_boundary, &bdev->setting_file[BT0_GROUP_ANT0_OFFSET],
BUFFER_MODE_GROUP_LENGTH);
memcpy(btmtk_buffer_mode.bt0_ant1_grp_boundary, &bdev->setting_file[BT0_GROUP_ANT1_OFFSET],
BUFFER_MODE_GROUP_LENGTH);
memcpy(btmtk_buffer_mode.bt1_ant0_grp_boundary, &bdev->setting_file[BT1_GROUP_ANT0_OFFSET],
BUFFER_MODE_GROUP_LENGTH);
memcpy(btmtk_buffer_mode.bt1_ant1_grp_boundary, &bdev->setting_file[BT1_GROUP_ANT1_OFFSET],
BUFFER_MODE_GROUP_LENGTH);
memcpy(btmtk_buffer_mode.bt0_ant0_pwr_offset, &bdev->setting_file[BT0_CAL_ANT0_OFFSET],
BUFFER_MODE_CAL_LENGTH);
memcpy(btmtk_buffer_mode.bt0_ant1_pwr_offset, &bdev->setting_file[BT0_CAL_ANT1_OFFSET],
BUFFER_MODE_CAL_LENGTH);
memcpy(btmtk_buffer_mode.bt1_ant0_pwr_offset, &bdev->setting_file[BT1_CAL_ANT0_OFFSET],
BUFFER_MODE_CAL_LENGTH);
memcpy(btmtk_buffer_mode.bt1_ant1_pwr_offset, &bdev->setting_file[BT1_CAL_ANT1_OFFSET],
BUFFER_MODE_CAL_LENGTH);
*buffer_mode = &btmtk_buffer_mode;
}


@@ -0,0 +1,121 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#include "btmtk_main.h"
#include "btmtk_woble.h"
void btmtk_reset_waker(struct work_struct *work)
{
struct btmtk_dev *bdev = container_of(work, struct btmtk_dev, reset_waker);
struct btmtk_cif_state *cif_state = NULL;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
int cif_event = 0, err = 0;
cif_event = HIF_EVENT_SUBSYS_RESET;
if (BTMTK_CIF_IS_NULL(bdev, cif_event)) {
/* Error */
BTMTK_WARN("%s priv setting is NULL", __func__);
goto Finish;
}
while (!bdev->bt_cfg.support_dongle_reset) {
BTMTK_ERR("%s chip_reset is not support", __func__);
msleep(2000);
}
cif_state = &bdev->cif_state[cif_event];
/* Set Entering state */
btmtk_set_chip_state((void *)bdev, cif_state->ops_enter);
BTMTK_INFO("%s: Receive a byte (0xFF)", __func__);
/* read interrupt EP15 CR */
bdev->subsys_reset = 1;
bdev->sco_num = 0;
if (bmain_info->whole_reset_flag == 0) {
if (bmain_info->hif_hook.subsys_reset)
err = bmain_info->hif_hook.subsys_reset(bdev);
else
BTMTK_INFO("%s: Not support subsys chip reset", __func__);
} else {
err = -1;
BTMTK_INFO("%s: whole_reset_flag is %d", __func__, bmain_info->whole_reset_flag);
}
if (err) {
/* L0.5 reset failed, do whole chip reset */
/* A support-dongle-reset flag, read from bt.cfg, will be added */
bdev->subsys_reset = 0;
/* TODO: need to confirm with usb host when suspend fail, to do chip reset,
* because usb3.0 need to toggle reset pin after hub_event unfreeze,
* otherwise, it will not occur disconnect on Capy Platform. When Mstar
* chip has usb3.0 port, we will use Mstar platform to do comparison
* test, then found the final solution.
*/
/* msleep(2000); */
if (bmain_info->hif_hook.whole_reset)
bmain_info->hif_hook.whole_reset(bdev);
else
BTMTK_INFO("%s: Not support whole chip reset", __func__);
bmain_info->whole_reset_flag = 0;
goto Finish;
}
/* It's a test code for stress test (whole chip reset & L0.5 reset) */
#if 0
if (bdev->bt_cfg.support_dongle_reset == 0) {
err = btmtk_cif_subsys_reset(bdev);
if (err) {
/* L0.5 reset failed, do whole chip reset */
if (main_info.hif_hook.whole_reset)
main_info.hif_hook.whole_reset(bdev);
goto Finish;
}
} else {
/* L0.5 reset failed, do whole chip reset */
/* TODO: need to confirm with usb host when suspend fail, to do chip reset,
* because usb3.0 need to toggle reset pin after hub_event unfreeze,
* otherwise, it will not occur disconnect on Capy Platform. When Mstar
* chip has usb3.0 port, we will use Mstar platform to do comparison
* test, then found the final solution.
*/
/* msleep(2000); */
if (main_info.hif_hook.whole_reset)
main_info.hif_hook.whole_reset(bdev);
/* btmtk_send_hw_err_to_host(bdev); */
goto Finish;
}
#endif
bmain_info->reset_stack_flag = HW_ERR_CODE_CHIP_RESET;
bdev->subsys_reset = 0;
err = btmtk_cap_init(bdev);
if (err < 0) {
BTMTK_ERR("btmtk init failed!");
goto Finish;
}
err = btmtk_load_rom_patch(bdev);
if (err < 0) {
BTMTK_ERR("btmtk load rom patch failed!");
goto Finish;
}
btmtk_send_hw_err_to_host(bdev);
btmtk_woble_wake_unlock(bdev);
Finish:
bmain_info->hif_hook.chip_reset_notify(bdev);
/* Set End/Error state */
if (err < 0)
btmtk_set_chip_state((void *)bdev, cif_state->ops_error);
else
btmtk_set_chip_state((void *)bdev, cif_state->ops_end);
}


@@ -0,0 +1,757 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#include "btmtk_main.h"
#include "btmtk_fw_log.h"
/*
* BT Logger Tool turns the firmware Picus log on/off and sets 3 log levels (Low, SQC and Debug).
* For extension capability, the driver does not check the value range.
*
* Combine log state and log level to below settings:
* - 0x00: OFF
* - 0x01: Low Power
* - 0x02: SQC
* - 0x03: Debug
*/
#if (FW_LOG_DEFAULT_ON == 0)
#define BT_FWLOG_DEFAULT_LEVEL 0x00
#else
#define BT_FWLOG_DEFAULT_LEVEL 0x02
#endif
/* CTD BT log function and log status */
static wait_queue_head_t BT_log_wq;
static struct semaphore ioctl_mtx;
static uint8_t g_bt_on = BT_FWLOG_OFF;
static uint8_t g_log_on = BT_FWLOG_OFF;
static uint8_t g_log_level = BT_FWLOG_DEFAULT_LEVEL;
static uint8_t g_log_current = BT_FWLOG_OFF;
/* For fwlog dev node setting */
static struct btmtk_fops_fwlog *g_fwlog;
const struct file_operations BT_fopsfwlog = {
.open = btmtk_fops_openfwlog,
.release = btmtk_fops_closefwlog,
.read = btmtk_fops_readfwlog,
.write = btmtk_fops_writefwlog,
.poll = btmtk_fops_pollfwlog,
.unlocked_ioctl = btmtk_fops_unlocked_ioctlfwlog,
.compat_ioctl = btmtk_fops_compat_ioctlfwlog
};
__weak int32_t btmtk_intcmd_wmt_utc_sync(void)
{
BTMTK_ERR("weak function %s not implemented", __func__);
return -1;
}
__weak int32_t btmtk_intcmd_set_fw_log(uint8_t flag)
{
BTMTK_ERR("weak function %s not implemented", __func__);
return -1;
}
void fw_log_bt_state_cb(uint8_t state)
{
uint8_t on_off;
on_off = (state == FUNC_ON) ? BT_FWLOG_ON : BT_FWLOG_OFF;
BTMTK_INFO("bt_on(0x%x) state(%d) on_off(0x%x)", g_bt_on, state, on_off);
if (g_bt_on != on_off) {
// changed
if (on_off == BT_FWLOG_OFF) { // should turn off
g_bt_on = BT_FWLOG_OFF;
BTMTK_INFO("BT func off, no need to send hci cmd");
} else {
g_bt_on = BT_FWLOG_ON;
if (g_log_current) {
btmtk_intcmd_set_fw_log(g_log_current);
btmtk_intcmd_wmt_utc_sync();
}
}
}
}
void fw_log_bt_event_cb(void)
{
BTMTK_DBG("fw_log_bt_event_cb");
wake_up_interruptible(&BT_log_wq);
}
int btmtk_fops_initfwlog(void)
{
static int BT_majorfwlog;
dev_t devIDfwlog = MKDEV(BT_majorfwlog, 0);
int ret = 0;
int cdevErr = 0;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: Start", __func__);
if (g_fwlog == NULL) {
g_fwlog = kzalloc(sizeof(*g_fwlog), GFP_KERNEL);
if (!g_fwlog) {
BTMTK_ERR("%s: alloc memory fail (g_data)", __func__);
return -1;
}
}
//if (is_mt66xx(g_sbdev->chip_id)) {
if (bmain_info->hif_hook.log_init) {
bmain_info->hif_hook.log_init();
bmain_info->hif_hook.log_register_cb(fw_log_bt_event_cb);
init_waitqueue_head(&BT_log_wq);
sema_init(&ioctl_mtx, 1);
} else {
spin_lock_init(&g_fwlog->fwlog_lock);
skb_queue_head_init(&g_fwlog->fwlog_queue);
init_waitqueue_head(&(g_fwlog->fw_log_inq));
}
ret = alloc_chrdev_region(&devIDfwlog, 0, 1, BT_FWLOG_DEV_NODE);
if (ret) {
BTMTK_ERR("%s: fail to allocate chrdev", __func__);
goto alloc_error;
}
BT_majorfwlog = MAJOR(devIDfwlog);
cdev_init(&g_fwlog->BT_cdevfwlog, &BT_fopsfwlog);
g_fwlog->BT_cdevfwlog.owner = THIS_MODULE;
cdevErr = cdev_add(&g_fwlog->BT_cdevfwlog, devIDfwlog, 1);
if (cdevErr)
goto cdv_error;
g_fwlog->pBTClass = class_create(THIS_MODULE, BT_FWLOG_DEV_NODE);
if (IS_ERR(g_fwlog->pBTClass)) {
BTMTK_ERR("%s: class create fail, error code(%ld)\n", __func__, PTR_ERR(g_fwlog->pBTClass));
goto create_node_error;
}
g_fwlog->pBTDevfwlog = device_create(g_fwlog->pBTClass, NULL, devIDfwlog, NULL,
BT_FWLOG_DEV_NODE);
if (IS_ERR(g_fwlog->pBTDevfwlog)) {
BTMTK_ERR("%s: device(stpbtfwlog) create fail, error code(%ld)", __func__,
PTR_ERR(g_fwlog->pBTDevfwlog));
goto create_node_error;
}
BTMTK_INFO("%s: BT_majorfwlog %d, devIDfwlog %d", __func__, BT_majorfwlog, devIDfwlog);
g_fwlog->g_devIDfwlog = devIDfwlog;
BTMTK_INFO("%s: End", __func__);
return 0;
create_node_error:
if (g_fwlog->pBTClass) {
class_destroy(g_fwlog->pBTClass);
g_fwlog->pBTClass = NULL;
}
cdv_error:
if (cdevErr == 0)
cdev_del(&g_fwlog->BT_cdevfwlog);
if (ret == 0)
unregister_chrdev_region(devIDfwlog, 1);
alloc_error:
kfree(g_fwlog);
g_fwlog = NULL;
return -1;
}
int btmtk_fops_exitfwlog(void)
{
dev_t devIDfwlog = g_fwlog->g_devIDfwlog;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: Start\n", __func__);
//if (is_mt66xx(g_sbdev->chip_id))
if (bmain_info->hif_hook.log_deinit)
bmain_info->hif_hook.log_deinit();
if (g_fwlog->pBTDevfwlog) {
device_destroy(g_fwlog->pBTClass, devIDfwlog);
g_fwlog->pBTDevfwlog = NULL;
}
if (g_fwlog->pBTClass) {
class_destroy(g_fwlog->pBTClass);
g_fwlog->pBTClass = NULL;
}
BTMTK_INFO("%s: pBTDevfwlog, pBTClass done\n", __func__);
cdev_del(&g_fwlog->BT_cdevfwlog);
unregister_chrdev_region(devIDfwlog, 1);
BTMTK_INFO("%s: BT_chrdevfwlog driver removed.\n", __func__);
kfree(g_fwlog);
return 0;
}
ssize_t btmtk_fops_readfwlog(struct file *filp, char __user *buf, size_t count, loff_t *f_pos)
{
int copyLen = 0;
ulong flags = 0;
struct sk_buff *skb = NULL;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
//if (is_mt66xx(g_sbdev->chip_id)) {
if (bmain_info->hif_hook.log_read_to_user) {
copyLen = bmain_info->hif_hook.log_read_to_user(buf, count);
BTMTK_DBG("BT F/W log from Connsys, len %d", copyLen);
return copyLen;
}
/* picus reads from a queue; this may cause a performance issue */
spin_lock_irqsave(&g_fwlog->fwlog_lock, flags);
if (skb_queue_len(&g_fwlog->fwlog_queue))
skb = skb_dequeue(&g_fwlog->fwlog_queue);
spin_unlock_irqrestore(&g_fwlog->fwlog_lock, flags);
if (skb == NULL)
return 0;
if (skb->len <= count) {
if (copy_to_user(buf, skb->data, skb->len))
BTMTK_ERR("%s: copy_to_user failed!", __func__);
copyLen = skb->len;
} else {
BTMTK_DBG("%s: socket buffer length error(count: %d, skb.len: %d)",
__func__, (int)count, skb->len);
}
kfree_skb(skb);
return copyLen;
}
ssize_t btmtk_fops_writefwlog(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos)
{
int i = 0, len = 0, ret = -1;
int hci_idx = 0;
int vlen = 0, index = 3;
struct sk_buff *skb = NULL;
int state = BTMTK_STATE_INIT;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
u8 *i_fwlog_buf = NULL;
u8 *o_fwlog_buf = NULL;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
struct btmtk_dev **pp_bdev = btmtk_get_pp_bdev();
/* only 7xxx will use writefwlog, 66xx not used */
/*if (is_mt66xx(bdev->chip_id)) {
* BTMTK_WARN("%s: not implement!", __func__);
* return 0;
* }
*/
i_fwlog_buf = kmalloc(HCI_MAX_COMMAND_BUF_SIZE, GFP_KERNEL);
if (!i_fwlog_buf) {
BTMTK_ERR("%s: alloc i_fwlog_buf failed", __func__);
ret = -ENOMEM;
goto exit;
}
o_fwlog_buf = kmalloc(HCI_MAX_COMMAND_SIZE, GFP_KERNEL);
if (!o_fwlog_buf) {
BTMTK_ERR("%s: alloc o_fwlog_buf failed", __func__);
ret = -ENOMEM;
goto exit;
}
if (count > HCI_MAX_COMMAND_BUF_SIZE) {
BTMTK_ERR("%s: your command is larger than maximum length, count = %zd",
__func__, count);
ret = -ENOMEM;
goto exit;
}
memset(i_fwlog_buf, 0, HCI_MAX_COMMAND_BUF_SIZE);
memset(o_fwlog_buf, 0, HCI_MAX_COMMAND_SIZE);
if (copy_from_user(i_fwlog_buf, buf, count) != 0) {
BTMTK_ERR("%s: Failed to copy data", __func__);
ret = -ENODATA;
goto exit;
}
/* For log level, EX: echo log_lvl=1 > /dev/stpbtfwlog */
if (strncmp(i_fwlog_buf, "log_lvl=", strlen("log_lvl=")) == 0) {
u8 val = *(i_fwlog_buf + strlen("log_lvl=")) - '0';
if (val > BTMTK_LOG_LVL_MAX || val <= 0) {
BTMTK_ERR("Got incorrect value for log level(%d)", val);
ret = -EINVAL;
goto exit;
}
btmtk_log_lvl = val;
BTMTK_INFO("btmtk_log_lvl = %d", btmtk_log_lvl);
ret = count;
goto exit;
}
/* For bperf, EX: echo bperf=1 > /dev/stpbtfwlog */
if (strncmp(i_fwlog_buf, "bperf=", strlen("bperf=")) == 0) {
u8 val = *(i_fwlog_buf + strlen("bperf=")) - '0';
g_fwlog->btmtk_bluetooth_kpi = val;
BTMTK_INFO("%s: set bluetooth KPI feature(bperf) to %d", __func__, g_fwlog->btmtk_bluetooth_kpi);
ret = count;
goto exit;
}
if (strncmp(i_fwlog_buf, "whole chip reset", strlen("whole chip reset")) == 0) {
BTMTK_INFO("whole chip reset start");
bmain_info->whole_reset_flag = 1;
schedule_work(&pp_bdev[hci_idx]->reset_waker);
ret = count;
goto exit;
}
if (strncmp(i_fwlog_buf, "subsys chip reset", strlen("subsys chip reset")) == 0) {
BTMTK_INFO("subsys chip reset");
schedule_work(&pp_bdev[hci_idx]->reset_waker);
ret = count;
goto exit;
}
/* hci input command format : echo 01 be fc 01 05 > /dev/stpbtfwlog */
/* We take the data from index three to end. */
for (i = 0; i < count; i++) {
char *pos = i_fwlog_buf + i;
char temp_str[3] = {'\0'};
long res = 0;
if (*pos == ' ' || *pos == '\t' || *pos == '\r' || *pos == '\n') {
continue;
} else if (*pos == '0' && (*(pos + 1) == 'x' || *(pos + 1) == 'X')) {
i++;
continue;
} else if (!(*pos >= '0' && *pos <= '9') && !(*pos >= 'A' && *pos <= 'F')
&& !(*pos >= 'a' && *pos <= 'f')) {
BTMTK_ERR("%s: There is an invalid input(%c)", __func__, *pos);
ret = -EINVAL;
goto exit;
}
temp_str[0] = *pos;
temp_str[1] = *(pos + 1);
i++;
ret = kstrtol(temp_str, 16, &res);
if (ret == 0)
o_fwlog_buf[len++] = (u8)res;
else
BTMTK_ERR("%s: Convert %s failed(%d)", __func__, temp_str, ret);
}
if (o_fwlog_buf[0] != HCI_COMMAND_PKT && o_fwlog_buf[0] != FWLOG_TYPE) {
BTMTK_ERR("%s: Not support 0x%02X yet", __func__, o_fwlog_buf[0]);
ret = -EPROTONOSUPPORT;
goto exit;
}
/* check HCI command length */
if (len > HCI_MAX_COMMAND_SIZE) {
BTMTK_ERR("%s: command is larger than max buf size, length = %d", __func__, len);
ret = -ENOMEM;
goto exit;
}
skb = alloc_skb(count + BT_SKB_RESERVE, GFP_ATOMIC);
if (!skb) {
BTMTK_ERR("%s allocate skb failed!!", __func__);
ret = -ENOMEM;
goto exit;
}
/* send HCI command */
bt_cb(skb)->pkt_type = HCI_COMMAND_PKT;
/* format */
/* 0xF0 XX XX 00 01 AA 10 BB CC CC CC CC ... */
/* XX XX total length */
/* 00 : hci index setting type */
/* AA hci index to indicate which hci send following command*/
/* 10 : raw data type*/
/* BB command length */
/* CC command */
if (o_fwlog_buf[0] == FWLOG_TYPE) {
while (index < ((o_fwlog_buf[2] << 8) + o_fwlog_buf[1])) {
switch (o_fwlog_buf[index]) {
case FWLOG_HCI_IDX: /* hci index */
vlen = o_fwlog_buf[index + 1];
hci_idx = o_fwlog_buf[index + 2];
BTMTK_DBG("%s: send to hci%d", __func__, hci_idx);
index += (FWLOG_ATTR_TL_SIZE + vlen);
break;
case FWLOG_TX: /* tx raw data */
vlen = o_fwlog_buf[index + 1];
memcpy(skb->data, o_fwlog_buf + index + FWLOG_ATTR_TL_SIZE, vlen);
skb->len = vlen;
index = index + FWLOG_ATTR_TL_SIZE + vlen;
break;
default:
BTMTK_WARN("Invalid opcode");
ret = -1;
goto free_skb;
}
}
} else {
memcpy(skb->data, o_fwlog_buf, len);
skb->len = len;
pp_bdev[hci_idx]->opcode_usr[0] = o_fwlog_buf[1];
pp_bdev[hci_idx]->opcode_usr[1] = o_fwlog_buf[2];
}
/* do not send the command if pp_bdev[hci_idx]->hdev is not defined */
if (pp_bdev[hci_idx]->hdev == NULL) {
BTMTK_DBG("pp_bdev[%d] not define", hci_idx);
ret = count;
goto free_skb;
}
state = btmtk_get_chip_state(pp_bdev[hci_idx]);
if (state != BTMTK_STATE_WORKING) {
BTMTK_WARN("%s: current is in suspend/resume/standby/dump/disconnect (%d).",
__func__, state);
ret = -EBADFD;
goto free_skb;
}
fstate = btmtk_fops_get_state(pp_bdev[hci_idx]);
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open yet(%d)!", __func__, fstate);
ret = -ENODEV;
goto free_skb;
}
if (pp_bdev[hci_idx]->power_state == BTMTK_DONGLE_STATE_POWER_OFF) {
BTMTK_WARN("%s: dongle state already power off, do not write", __func__);
ret = -EFAULT;
goto free_skb;
}
/* clean fwlog queue before enable picus log */
if (skb_queue_len(&g_fwlog->fwlog_queue) && skb->data[0] == 0x01
&& skb->data[1] == 0x5d && skb->data[2] == 0xfc && skb->data[4] == 0x00) {
skb_queue_purge(&g_fwlog->fwlog_queue);
BTMTK_INFO("clean fwlog_queue, skb_queue_len = %d", skb_queue_len(&g_fwlog->fwlog_queue));
}
btmtk_dispatch_fwlog_bluetooth_kpi(pp_bdev[hci_idx], skb->data, skb->len, KPI_WITHOUT_TYPE);
ret = bmain_info->hif_hook.send_cmd(pp_bdev[hci_idx], skb, 0, 0, (int)BTMTK_TX_PKT_FROM_HOST);
if (ret < 0) {
BTMTK_ERR("%s failed!!", __func__);
goto free_skb;
} else
BTMTK_INFO("%s: OK", __func__);
BTMTK_INFO("%s: Write end(len: %d)", __func__, len);
ret = count;
goto exit;
free_skb:
kfree_skb(skb);
skb = NULL;
exit:
kfree(i_fwlog_buf);
kfree(o_fwlog_buf);
return ret; /* If input is correct should return the same length */
}
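The write handler above converts text such as `echo 01 be fc 01 05 > /dev/stpbtfwlog` into raw HCI bytes: whitespace is skipped, an optional `0x`/`0X` prefix is ignored, and each remaining hex-digit pair becomes one byte. A self-contained sketch of that conversion loop (the helper name is hypothetical; the driver uses `kstrtol` where this uses `strtol`):

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <stdlib.h>

/* Mirrors the hex-text parsing loop in btmtk_fops_writefwlog().
 * Returns the number of bytes written to out, or -1 on invalid input. */
static int parse_hex_cmd(const char *in, size_t in_len, unsigned char *out)
{
    int len = 0;
    size_t i;

    for (i = 0; i < in_len; i++) {
        char c = in[i];
        char pair[3] = { 0 };

        if (c == ' ' || c == '\t' || c == '\r' || c == '\n')
            continue;
        if (c == '0' && i + 1 < in_len &&
            (in[i + 1] == 'x' || in[i + 1] == 'X')) {
            i++;            /* skip the "0x" prefix */
            continue;
        }
        if (!isxdigit((unsigned char)c))
            return -1;
        pair[0] = c;
        pair[1] = in[i + 1];
        i++;                /* consumed a digit pair */
        out[len++] = (unsigned char)strtol(pair, NULL, 16);
    }
    return len;
}
```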
int btmtk_fops_openfwlog(struct inode *inode, struct file *file)
{
BTMTK_INFO("%s: Start.", __func__);
return 0;
}
int btmtk_fops_closefwlog(struct inode *inode, struct file *file)
{
BTMTK_INFO("%s: Start.", __func__);
return 0;
}
long btmtk_fops_unlocked_ioctlfwlog(struct file *filp, unsigned int cmd, unsigned long arg)
{
long retval = 0;
uint8_t log_tmp = BT_FWLOG_OFF;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
/* only 66xx will use ioctlfwlog, 76xx not used */
/* if (!is_mt66xx(g_sbdev->chip_id)) {
* BTMTK_WARN("%s: not implement!", __func__);
* return 0;
*}
*/
down(&ioctl_mtx);
if (bmain_info->hif_hook.log_hold_sem)
bmain_info->hif_hook.log_hold_sem();
switch (cmd) {
case BT_FWLOG_IOC_ON_OFF:
/* Connsyslogger daemon dynamically enable/disable Picus log */
BTMTK_INFO("[ON_OFF]arg(%lu) bt_on(0x%x) log_on(0x%x) level(0x%x) log_cur(0x%x)",
arg, g_bt_on, g_log_on, g_log_level, g_log_current);
log_tmp = (arg == 0) ? BT_FWLOG_OFF : BT_FWLOG_ON;
if (log_tmp != g_log_on) { // changed
g_log_on = log_tmp;
g_log_current = g_log_on & g_log_level;
if (g_bt_on) {
retval = btmtk_intcmd_set_fw_log(g_log_current);
btmtk_intcmd_wmt_utc_sync();
}
}
break;
case BT_FWLOG_IOC_SET_LEVEL:
/* Connsyslogger daemon dynamically set Picus log level */
BTMTK_INFO("[SET_LEVEL]arg(%lu) bt_on(0x%x) log_on(0x%x) level(0x%x) log_cur(0x%x)",
arg, g_bt_on, g_log_on, g_log_level, g_log_current);
log_tmp = (uint8_t)arg;
if (log_tmp != g_log_level) {
g_log_level = log_tmp;
g_log_current = g_log_on & g_log_level;
if (g_bt_on & g_log_on) {
// driver on and log on
retval = btmtk_intcmd_set_fw_log(g_log_current);
btmtk_intcmd_wmt_utc_sync();
}
}
break;
case BT_FWLOG_IOC_GET_LEVEL:
retval = g_log_level;
BTMTK_INFO("[GET_LEVEL]return %ld", retval);
break;
default:
BTMTK_ERR("Unknown cmd: 0x%08x", cmd);
retval = -EOPNOTSUPP;
break;
}
if (bmain_info->hif_hook.log_release_sem)
bmain_info->hif_hook.log_release_sem();
up(&ioctl_mtx);
return retval;
}
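In both ioctl cases the effective log setting is `g_log_current = g_log_on & g_log_level`. This sketch assumes `BT_FWLOG_ON` is an all-ones mask (the exact value is not shown in this diff), so the bitwise AND yields the level when logging is enabled and `0x00` (OFF) otherwise:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values: BT_FWLOG_OFF is 0 and BT_FWLOG_ON is an all-ones mask,
 * so "on & level" gates the level through only when logging is on. */
#define BT_FWLOG_OFF 0x00
#define BT_FWLOG_ON  0xff

/* mirrors g_log_current = g_log_on & g_log_level */
static uint8_t fwlog_effective_level(uint8_t log_on, uint8_t log_level)
{
    return log_on & log_level;
}
```

This is why a level change while logging is off takes effect silently: `g_log_current` stays 0 until the ON_OFF ioctl re-enables logging.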
long btmtk_fops_compat_ioctlfwlog(struct file *filp, unsigned int cmd, unsigned long arg)
{
return btmtk_fops_unlocked_ioctlfwlog(filp, cmd, arg);
}
unsigned int btmtk_fops_pollfwlog(struct file *file, poll_table *wait)
{
unsigned int mask = 0;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
//if (is_mt66xx(g_sbdev->chip_id)) {
if (bmain_info->hif_hook.log_get_buf_size) {
poll_wait(file, &BT_log_wq, wait);
if (bmain_info->hif_hook.log_get_buf_size() > 0)
mask = (POLLIN | POLLRDNORM);
} else {
poll_wait(file, &g_fwlog->fw_log_inq, wait);
if (skb_queue_len(&g_fwlog->fwlog_queue) > 0)
mask |= POLLIN | POLLRDNORM; /* readable */
}
return mask;
}
static void btmtk_fwdump_wake_lock(void)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: enter", __func__);
__pm_stay_awake(bmain_info->fwdump_ws);
BTMTK_INFO("%s: exit", __func__);
}
static void btmtk_fwdump_wake_unlock(void)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: enter", __func__);
__pm_relax(bmain_info->fwdump_ws);
BTMTK_INFO("%s: exit", __func__);
}
static int btmtk_skb_enq_fwlog(struct btmtk_dev *bdev, void *src, u32 len, u8 type, struct sk_buff_head *queue)
{
struct sk_buff *skb_tmp = NULL;
ulong flags = 0;
int retry = 10, index = FWLOG_TL_SIZE;
do {
skb_tmp = alloc_skb(len + FWLOG_PRSV_LEN, GFP_ATOMIC);
if (skb_tmp != NULL)
break;
else if (retry <= 0) {
pr_info("%s: alloc_skb failed, out of retries", __func__);
return -ENOMEM;
}
pr_info("%s: alloc_skb failed, retry = %d", __func__, retry);
} while (retry-- > 0);
if (type) {
skb_tmp->data[0] = FWLOG_TYPE;
/* 01 for dongle index */
skb_tmp->data[index] = FWLOG_DONGLE_IDX;
skb_tmp->data[index + 1] = sizeof(bdev->dongle_index);
skb_tmp->data[index + 2] = bdev->dongle_index;
index += (FWLOG_ATTR_RX_LEN_LEN + FWLOG_ATTR_TYPE_LEN);
/* 11 for rx data*/
skb_tmp->data[index] = FWLOG_RX;
if (type == HCI_ACLDATA_PKT || type == HCI_EVENT_PKT || type == HCI_COMMAND_PKT) {
skb_tmp->data[index + 1] = len & 0x00FF;
skb_tmp->data[index + 2] = (len & 0xFF00) >> 8;
skb_tmp->data[index + 3] = type;
index += (HCI_TYPE_SIZE + FWLOG_ATTR_RX_LEN_LEN + FWLOG_ATTR_TYPE_LEN);
} else {
skb_tmp->data[index + 1] = len & 0x00FF;
skb_tmp->data[index + 2] = (len & 0xFF00) >> 8;
index += (FWLOG_ATTR_RX_LEN_LEN + FWLOG_ATTR_TYPE_LEN);
}
memcpy(&skb_tmp->data[index], src, len);
skb_tmp->data[1] = (len + index - FWLOG_TL_SIZE) & 0x00FF;
skb_tmp->data[2] = ((len + index - FWLOG_TL_SIZE) & 0xFF00) >> 8;
skb_tmp->len = len + index;
} else {
memcpy(skb_tmp->data, src, len);
skb_tmp->len = len;
}
spin_lock_irqsave(&g_fwlog->fwlog_lock, flags);
skb_queue_tail(queue, skb_tmp);
spin_unlock_irqrestore(&g_fwlog->fwlog_lock, flags);
return 0;
}
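For the `type != 0` path above, the skb is wrapped in a small TLV envelope: a 3-byte header (type plus 16-bit little-endian length), a dongle-index attribute, an rx attribute carrying the payload length and HCI packet type, then the payload. A flat-buffer sketch of that layout (constant values are assumptions taken from the comments, e.g. tag `0x01` for dongle index and `0x11` for rx data):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Values assumed from the comments in btmtk_skb_enq_fwlog(). */
#define FWLOG_TYPE        0xF0
#define FWLOG_TL_SIZE     3   /* type byte + 16-bit length */
#define FWLOG_DONGLE_IDX  0x01
#define FWLOG_RX          0x11

/* Wraps an HCI payload the way the driver's HCI-typed branch does.
 * Returns the total wrapped length. dst must hold len + 10 bytes. */
static int fwlog_wrap(uint8_t *dst, uint8_t dongle_idx, uint8_t hci_type,
                      const uint8_t *payload, uint16_t len)
{
    int index = FWLOG_TL_SIZE;

    dst[0] = FWLOG_TYPE;
    dst[index] = FWLOG_DONGLE_IDX;
    dst[index + 1] = 1;                 /* attribute value length */
    dst[index + 2] = dongle_idx;
    index += 3;
    dst[index] = FWLOG_RX;
    dst[index + 1] = len & 0x00FF;      /* little-endian payload length */
    dst[index + 2] = (len & 0xFF00) >> 8;
    dst[index + 3] = hci_type;
    index += 4;
    memcpy(&dst[index], payload, len);
    /* total length after the 3-byte header, little-endian */
    dst[1] = (len + index - FWLOG_TL_SIZE) & 0x00FF;
    dst[2] = ((len + index - FWLOG_TL_SIZE) & 0xFF00) >> 8;
    return len + index;
}
```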
int btmtk_dispatch_fwlog_bluetooth_kpi(struct btmtk_dev *bdev, u8 *buf, int len, u8 type)
{
static u8 fwlog_blocking_warn;
int ret = 0;
if (g_fwlog->btmtk_bluetooth_kpi &&
skb_queue_len(&g_fwlog->fwlog_queue) < FWLOG_BLUETOOTH_KPI_QUEUE_COUNT) {
/* sent event to queue, picus tool will log it for bluetooth KPI feature */
if (btmtk_skb_enq_fwlog(bdev, buf, len, type, &g_fwlog->fwlog_queue) == 0) {
wake_up_interruptible(&g_fwlog->fw_log_inq);
fwlog_blocking_warn = 0;
}
} else {
if (fwlog_blocking_warn == 0) {
fwlog_blocking_warn = 1;
pr_info("btmtk_usb fwlog queue size is full(bluetooth_kpi)");
}
}
return ret;
}
int btmtk_dispatch_fwlog(struct btmtk_dev *bdev, struct sk_buff *skb)
{
static u8 fwlog_picus_blocking_warn;
static u8 fwlog_fwdump_blocking_warn;
int state = BTMTK_STATE_INIT;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
if ((bt_cb(skb)->pkt_type == HCI_ACLDATA_PKT) &&
skb->data[0] == 0x6f &&
skb->data[1] == 0xfc) {
static int dump_data_counter;
static int dump_data_length;
state = btmtk_get_chip_state(bdev);
if (state != BTMTK_STATE_FW_DUMP) {
BTMTK_INFO("%s: FW dump begin", __func__);
btmtk_hci_snoop_print_to_log();
/* Print too much log, it may cause kernel panic. */
dump_data_counter = 0;
dump_data_length = 0;
btmtk_set_chip_state(bdev, BTMTK_STATE_FW_DUMP);
btmtk_fwdump_wake_lock();
}
dump_data_counter++;
dump_data_length += skb->len;
/* coredump */
/* print dump data to console */
if (dump_data_counter % 1000 == 0) {
BTMTK_INFO("%s: FW dump on-going, total_packet = %d, total_length = %d",
__func__, dump_data_counter, dump_data_length);
}
/* print dump data to console */
if (dump_data_counter < 20)
BTMTK_INFO("%s: FW dump data (%d): %s",
__func__, dump_data_counter, &skb->data[4]);
/* On newer generations (e.g. 79xx), the last coredump packet is
 * detected by the keyword "; coredump end".
 */
if (skb->data[skb->len - 4] == 'e' &&
skb->data[skb->len - 3] == 'n' &&
skb->data[skb->len - 2] == 'd') {
/* This is the latest coredump packet. */
BTMTK_INFO("%s: FW dump end, dump_data_counter = %d", __func__, dump_data_counter);
/* TODO: Chip reset*/
bmain_info->reset_stack_flag = HW_ERR_CODE_CORE_DUMP;
btmtk_fwdump_wake_unlock();
}
if (skb_queue_len(&g_fwlog->fwlog_queue) < FWLOG_ASSERT_QUEUE_COUNT) {
/* sent picus data to queue, picus tool will log it */
if (btmtk_skb_enq_fwlog(bdev, skb->data, skb->len, 0, &g_fwlog->fwlog_queue) == 0) {
wake_up_interruptible(&g_fwlog->fw_log_inq);
fwlog_fwdump_blocking_warn = 0;
}
} else {
if (fwlog_fwdump_blocking_warn == 0) {
fwlog_fwdump_blocking_warn = 1;
pr_info("btmtk fwlog queue size is full(coredump)");
}
}
if (!bdev->bt_cfg.support_picus_to_host)
return 1;
} else if ((bt_cb(skb)->pkt_type == HCI_ACLDATA_PKT) &&
(skb->data[0] == 0xff || skb->data[0] == 0xfe) &&
skb->data[1] == 0x05 &&
!bdev->bt_cfg.support_picus_to_host) {
/* picus or syslog */
if (skb_queue_len(&g_fwlog->fwlog_queue) < FWLOG_QUEUE_COUNT) {
if (btmtk_skb_enq_fwlog(bdev, skb->data, skb->len,
FWLOG_TYPE, &g_fwlog->fwlog_queue) == 0) {
wake_up_interruptible(&g_fwlog->fw_log_inq);
fwlog_picus_blocking_warn = 0;
}
} else {
if (fwlog_picus_blocking_warn == 0) {
fwlog_picus_blocking_warn = 1;
pr_info("btmtk fwlog queue size is full(picus)");
}
}
return 1;
} else if ((bt_cb(skb)->pkt_type == HCI_EVENT_PKT) &&
skb->data[0] == 0x0E &&
bdev->opcode_usr[0] == skb->data[3] &&
bdev->opcode_usr[1] == skb->data[4]) {
BTMTK_INFO_RAW(skb->data, skb->len, "%s: Discard event from user hci command - ", __func__);
bdev->opcode_usr[0] = 0;
bdev->opcode_usr[1] = 0;
return 1;
}
return 0;
}
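The coredump-end detection above looks for the literal bytes `'e'`, `'n'`, `'d'` at offsets `len-4..len-2` of the packet, which matches a trailing keyword such as `"; coredump end"` followed by one more byte. A standalone sketch of that predicate (helper name is hypothetical):

```c
#include <stddef.h>

/* Mirrors the end-of-coredump check in btmtk_dispatch_fwlog(): the last
 * packet carries "end" at len-4..len-2, i.e. one byte (e.g. '\n' or NUL)
 * follows the keyword. */
static int is_coredump_end(const unsigned char *data, size_t len)
{
    if (len < 4)
        return 0;
    return data[len - 4] == 'e' &&
           data[len - 3] == 'n' &&
           data[len - 2] == 'd';
}
```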

File diff suppressed because it is too large


@@ -0,0 +1,983 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019 MediaTek Inc.
*/
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/input.h>
#include <linux/pm_wakeup.h>
#include <linux/interrupt.h>
#include "btmtk_woble.h"
static int is_support_unify_woble(struct btmtk_dev *bdev)
{
if (bdev->bt_cfg.support_unify_woble) {
if (is_mt7922(bdev->chip_id) || is_mt7961(bdev->chip_id) || is_mt7663(bdev->chip_id))
return 1;
else
return 0;
} else {
return 0;
}
}
static void btmtk_woble_wake_lock(struct btmtk_dev *bdev)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
if (bdev->bt_cfg.support_woble_wakelock) {
BTMTK_INFO("%s: enter", __func__);
__pm_stay_awake(bmain_info->woble_ws);
BTMTK_INFO("%s: exit", __func__);
}
}
void btmtk_woble_wake_unlock(struct btmtk_dev *bdev)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
if (bdev->bt_cfg.support_woble_wakelock) {
BTMTK_INFO("%s: enter", __func__);
__pm_relax(bmain_info->woble_ws);
BTMTK_INFO("%s: exit", __func__);
}
}
static int btmtk_send_woble_apcf_reserved(struct btmtk_dev *bdev)
{
u8 reserve_apcf_cmd[RES_APCF_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x05, 0x01, 0x30, 0x02, 0x61, 0x02 };
u8 reserve_apcf_event[RES_APCF_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x11 };
int ret = -1;
if (bdev == NULL) {
BTMTK_ERR("%s: Incorrect bdev", __func__);
return ret;
}
if (is_mt7922(bdev->chip_id) || is_mt7961(bdev->chip_id))
ret = btmtk_main_send_cmd(bdev, reserve_apcf_cmd, RES_APCF_CMD_LEN,
reserve_apcf_event, RES_APCF_EVT_LEN, 0, 0,
BTMTK_TX_PKT_FROM_HOST);
else
BTMTK_WARN("%s: not support for 0x%x", __func__, bdev->chip_id);
BTMTK_INFO("%s: ret %d", __func__, ret);
return ret;
}
static int btmtk_send_woble_read_BDADDR_cmd(struct btmtk_dev *bdev)
{
u8 cmd[READ_ADDRESS_CMD_LEN] = { 0x01, 0x09, 0x10, 0x00 };
u8 event[READ_ADDRESS_EVT_HDR_LEN] = { 0x04, 0x0E, 0x0A, 0x01, 0x09, 0x10, 0x00, /* AA, BB, CC, DD, EE, FF */ };
int i;
int ret = -1;
BTMTK_INFO("%s: begin", __func__);
if (bdev == NULL || bdev->io_buf == NULL) {
BTMTK_ERR("%s: Incorrect bdev", __func__);
return ret;
}
for (i = 0; i < BD_ADDRESS_SIZE; i++) {
if (bdev->bdaddr[i] != 0) {
ret = 0;
goto done;
}
}
ret = btmtk_main_send_cmd(bdev,
cmd, READ_ADDRESS_CMD_LEN,
event, READ_ADDRESS_EVT_HDR_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
/* BD address will be retrieved in btmtk_rx_work */
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
done:
BTMTK_INFO("%s, end, ret = %d", __func__, ret);
return ret;
}
static int btmtk_send_unify_woble_suspend_default_cmd(struct btmtk_dev *bdev)
{
u8 cmd[WOBLE_ENABLE_DEFAULT_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x24, 0x01, 0x20, 0x02, 0x00, 0x01,
0x02, 0x01, 0x00, 0x05, 0x10, 0x00, 0x00, 0x40, 0x06,
0x02, 0x40, 0x0A, 0x02, 0x41, 0x0F, 0x05, 0x24, 0x20,
0x04, 0x32, 0x00, 0x09, 0x26, 0xC0, 0x12, 0x00, 0x00,
0x12, 0x00, 0x00, 0x00};
u8 event[WOBLE_ENABLE_DEFAULT_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x00 };
int ret = 0; /* if successful, 0 */
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev,
cmd, WOBLE_ENABLE_DEFAULT_CMD_LEN,
event, WOBLE_ENABLE_DEFAULT_EVT_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
BTMTK_INFO("%s: end. ret = %d", __func__, ret);
return ret;
}
static int btmtk_send_unify_woble_resume_default_cmd(struct btmtk_dev *bdev)
{
u8 cmd[WOBLE_DISABLE_DEFAULT_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x05, 0x01, 0x21, 0x02, 0x00, 0x00 };
u8 event[WOBLE_DISABLE_DEFAULT_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x01 };
int ret = 0; /* if successful, 0 */
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev,
cmd, WOBLE_DISABLE_DEFAULT_CMD_LEN,
event, WOBLE_DISABLE_DEFAULT_EVT_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
BTMTK_INFO("%s: end. ret = %d", __func__, ret);
return ret;
}
static int btmtk_send_woble_suspend_cmd(struct btmtk_dev *bdev)
{
/* radio off cmd with wobx_mode_disable, used when unify woble off */
u8 radio_off_cmd[RADIO_OFF_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x05, 0x01, 0x20, 0x02, 0x00, 0x00 };
u8 event[RADIO_OFF_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x00 };
int ret = 0; /* if successful, 0 */
BTMTK_INFO("%s: woble not supported, send radio off cmd", __func__);
ret = btmtk_main_send_cmd(bdev,
radio_off_cmd, RADIO_OFF_CMD_LEN,
event, RADIO_OFF_EVT_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
return ret;
}
static int btmtk_send_woble_resume_cmd(struct btmtk_dev *bdev)
{
/* radio on cmd with wobx_mode_disable, used when unify woble off */
u8 radio_on_cmd[RADIO_ON_CMD_LEN] = { 0x01, 0xC9, 0xFC, 0x05, 0x01, 0x21, 0x02, 0x00, 0x00 };
u8 event[RADIO_ON_EVT_LEN] = { 0x04, 0xE6, 0x02, 0x08, 0x01 };
int ret = 0; /* if successful, 0 */
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev,
radio_on_cmd, RADIO_ON_CMD_LEN,
event, RADIO_ON_EVT_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
return ret;
}
static int btmtk_set_Woble_APCF_filter_parameter(struct btmtk_dev *bdev)
{
u8 cmd[APCF_FILTER_CMD_LEN] = { 0x01, 0x57, 0xFD, 0x0A,
0x01, 0x00, 0x0A, 0x20, 0x00, 0x20, 0x00, 0x01, 0x80, 0x00 };
u8 event[APCF_FILTER_EVT_HDR_LEN] = { 0x04, 0x0E, 0x07,
0x01, 0x57, 0xFD, 0x00, 0x01/*, 00, 63*/ };
int ret = -1;
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev, cmd, APCF_FILTER_CMD_LEN,
event, APCF_FILTER_EVT_HDR_LEN, 0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: end ret %d", __func__, ret);
else
ret = 0;
BTMTK_INFO("%s: end ret=%d", __func__, ret);
return ret;
}
/**
* Set APCF manufacturer data and filter parameter
*/
static int btmtk_set_Woble_APCF(struct btmtk_woble *bt_woble)
{
u8 manufactur_data[APCF_CMD_LEN] = { 0x01, 0x57, 0xFD, 0x27, 0x06, 0x00, 0x0A,
0x46, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x43, 0x52, 0x4B, 0x54, 0x4D,
0xFF, 0xFF, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
u8 event[APCF_EVT_HDR_LEN] = { 0x04, 0x0E, 0x07, 0x01, 0x57, 0xFD, 0x00, /* 0x06 00 63 */ };
int ret = -1;
u8 i = 0;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s: woble_setting_apcf[0].length %d",
__func__, bt_woble->woble_setting_apcf[0].length);
/* start to send apcf cmd from woble setting file */
if (bt_woble->woble_setting_apcf[0].length) {
for (i = 0; i < WOBLE_SETTING_COUNT; i++) {
if (!bt_woble->woble_setting_apcf[i].length)
continue;
BTMTK_INFO("%s: apcf_fill_mac[%d].content[0] = 0x%02x", __func__, i,
bt_woble->woble_setting_apcf_fill_mac[i].content[0]);
BTMTK_INFO("%s: apcf_fill_mac_location[%d].length = %d", __func__, i,
bt_woble->woble_setting_apcf_fill_mac_location[i].length);
if ((bt_woble->woble_setting_apcf_fill_mac[i].content[0] == 1) &&
bt_woble->woble_setting_apcf_fill_mac_location[i].length) {
/* need to add BD addr to APCF cmd */
memcpy(bt_woble->woble_setting_apcf[i].content +
(*bt_woble->woble_setting_apcf_fill_mac_location[i].content + 1),
bdev->bdaddr, BD_ADDRESS_SIZE);
BTMTK_INFO("%s: apcf[%d], add local BDADDR to location %d", __func__, i,
(*bt_woble->woble_setting_apcf_fill_mac_location[i].content));
}
BTMTK_INFO_RAW(bt_woble->woble_setting_apcf[i].content, bt_woble->woble_setting_apcf[i].length,
"Send woble_setting_apcf[%d] ", i);
ret = btmtk_main_send_cmd(bdev, bt_woble->woble_setting_apcf[i].content,
bt_woble->woble_setting_apcf[i].length, event, APCF_EVT_HDR_LEN, 0, 0,
BTMTK_TX_PKT_FROM_HOST);
if (ret < 0) {
BTMTK_ERR("%s: manufactur_data error ret %d", __func__, ret);
return ret;
}
}
} else { /* use default */
BTMTK_INFO("%s: use default manufactur data", __func__);
memcpy(manufactur_data + 10, bdev->bdaddr, BD_ADDRESS_SIZE);
ret = btmtk_main_send_cmd(bdev, manufactur_data, APCF_CMD_LEN,
event, APCF_EVT_HDR_LEN, 0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0) {
BTMTK_ERR("%s: manufactur_data error ret %d", __func__, ret);
return ret;
}
ret = btmtk_set_Woble_APCF_filter_parameter(bdev);
}
BTMTK_INFO("%s: end ret=%d", __func__, ret);
return ret;
}
static int btmtk_set_Woble_Radio_Off(struct btmtk_woble *bt_woble)
{
int ret = -1;
int length = 0;
char *radio_off = NULL;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s: woble_setting_radio_off.length %d", __func__,
bt_woble->woble_setting_radio_off.length);
if (bt_woble->woble_setting_radio_off.length) {
/* start to send radio off cmd from woble setting file */
length = bt_woble->woble_setting_radio_off.length +
bt_woble->woble_setting_wakeup_type.length;
radio_off = kzalloc(length, GFP_KERNEL);
if (!radio_off) {
BTMTK_ERR("%s: alloc memory fail (radio_off)",
__func__);
ret = -ENOMEM;
goto Finish;
}
memcpy(radio_off,
bt_woble->woble_setting_radio_off.content,
bt_woble->woble_setting_radio_off.length);
if (bt_woble->woble_setting_wakeup_type.length) {
memcpy(radio_off + bt_woble->woble_setting_radio_off.length,
bt_woble->woble_setting_wakeup_type.content,
bt_woble->woble_setting_wakeup_type.length);
radio_off[3] += bt_woble->woble_setting_wakeup_type.length;
}
BTMTK_INFO_RAW(radio_off, length, "Send radio off");
ret = btmtk_main_send_cmd(bdev, radio_off, length,
bt_woble->woble_setting_radio_off_comp_event.content,
bt_woble->woble_setting_radio_off_comp_event.length, 0, 0,
BTMTK_TX_PKT_FROM_HOST);
kfree(radio_off);
radio_off = NULL;
} else { /* use default */
BTMTK_INFO("%s: use default radio off cmd", __func__);
ret = btmtk_send_unify_woble_suspend_default_cmd(bdev);
}
Finish:
BTMTK_INFO("%s, end ret=%d", __func__, ret);
return ret;
}
static int btmtk_set_Woble_Radio_On(struct btmtk_woble *bt_woble)
{
int ret = -1;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s: woble_setting_radio_on.length %d", __func__,
bt_woble->woble_setting_radio_on.length);
if (bt_woble->woble_setting_radio_on.length) {
/* start to send radio on cmd from woble setting file */
BTMTK_INFO_RAW(bt_woble->woble_setting_radio_on.content,
bt_woble->woble_setting_radio_on.length, "send radio on");
ret = btmtk_main_send_cmd(bdev, bt_woble->woble_setting_radio_on.content,
bt_woble->woble_setting_radio_on.length,
bt_woble->woble_setting_radio_on_comp_event.content,
bt_woble->woble_setting_radio_on_comp_event.length, 0, 0,
BTMTK_TX_PKT_FROM_HOST);
} else { /* use default */
BTMTK_WARN("%s: use default radio on cmd", __func__);
ret = btmtk_send_unify_woble_resume_default_cmd(bdev);
}
BTMTK_INFO("%s, end ret=%d", __func__, ret);
return ret;
}
static int btmtk_del_Woble_APCF_index(struct btmtk_dev *bdev)
{
u8 cmd[APCF_DELETE_CMD_LEN] = { 0x01, 0x57, 0xFD, 0x03, 0x01, 0x01, 0x0A };
u8 event[APCF_DELETE_EVT_HDR_LEN] = { 0x04, 0x0e, 0x07, 0x01, 0x57, 0xfd, 0x00, 0x01, /* 00, 63 */ };
int ret = -1;
BTMTK_INFO("%s, enter", __func__);
ret = btmtk_main_send_cmd(bdev,
cmd, APCF_DELETE_CMD_LEN,
event, APCF_DELETE_EVT_HDR_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: got error %d", __func__, ret);
BTMTK_INFO("%s, end", __func__);
return ret;
}
static int btmtk_set_Woble_APCF_Resume(struct btmtk_woble *bt_woble)
{
u8 event[APCF_RESUME_EVT_HDR_LEN] = { 0x04, 0x0e, 0x07, 0x01, 0x57, 0xfd, 0x00 };
u8 i = 0;
int ret = -1;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s, enter, bt_woble->woble_setting_apcf_resume[0].length= %d",
__func__, bt_woble->woble_setting_apcf_resume[0].length);
if (bt_woble->woble_setting_apcf_resume[0].length) {
BTMTK_INFO("%s: handle leave woble apcf from file", __func__);
for (i = 0; i < WOBLE_SETTING_COUNT; i++) {
if (!bt_woble->woble_setting_apcf_resume[i].length)
continue;
BTMTK_INFO_RAW(bt_woble->woble_setting_apcf_resume[i].content,
bt_woble->woble_setting_apcf_resume[i].length,
"%s: send apcf resume %d:", __func__, i);
ret = btmtk_main_send_cmd(bdev,
bt_woble->woble_setting_apcf_resume[i].content,
bt_woble->woble_setting_apcf_resume[i].length,
event, APCF_RESUME_EVT_HDR_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0) {
BTMTK_ERR("%s: Send apcf resume fail %d", __func__, ret);
return ret;
}
}
} else { /* use default */
BTMTK_WARN("%s: use default apcf resume cmd", __func__);
ret = btmtk_del_Woble_APCF_index(bdev);
if (ret < 0)
BTMTK_ERR("%s: btmtk_del_Woble_APCF_index return fail %d", __func__, ret);
}
BTMTK_INFO("%s, end", __func__);
return ret;
}
static int btmtk_load_woble_setting(char *bin_name,
struct device *dev, u32 *code_len, struct btmtk_woble *bt_woble)
{
int err;
struct btmtk_dev *bdev = bt_woble->bdev;
*code_len = 0;
err = btmtk_load_code_from_setting_files(bin_name, dev, code_len, bdev);
if (err) {
BTMTK_ERR("woble_setting btmtk_load_code_from_setting_files failed!!");
goto LOAD_END;
}
err = btmtk_load_fw_cfg_setting("APCF",
bt_woble->woble_setting_apcf, WOBLE_SETTING_COUNT, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("APCF_ADD_MAC",
bt_woble->woble_setting_apcf_fill_mac, WOBLE_SETTING_COUNT,
bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("APCF_ADD_MAC_LOCATION",
bt_woble->woble_setting_apcf_fill_mac_location, WOBLE_SETTING_COUNT,
bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOOFF", &bt_woble->woble_setting_radio_off, 1,
bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
switch (bdev->bt_cfg.unify_woble_type) {
case 0:
err = btmtk_load_fw_cfg_setting("WAKEUP_TYPE_LEGACY", &bt_woble->woble_setting_wakeup_type, 1,
bdev->setting_file, FW_CFG_INX_LEN_2);
break;
case 1:
err = btmtk_load_fw_cfg_setting("WAKEUP_TYPE_WAVEFORM", &bt_woble->woble_setting_wakeup_type, 1,
bdev->setting_file, FW_CFG_INX_LEN_2);
break;
case 2:
err = btmtk_load_fw_cfg_setting("WAKEUP_TYPE_IR", &bt_woble->woble_setting_wakeup_type, 1,
bdev->setting_file, FW_CFG_INX_LEN_2);
break;
default:
BTMTK_WARN("%s: unify_woble_type unknown(%d)", __func__, bdev->bt_cfg.unify_woble_type);
}
if (err)
BTMTK_WARN("%s: Parse unify_woble_type(%d) failed", __func__, bdev->bt_cfg.unify_woble_type);
err = btmtk_load_fw_cfg_setting("RADIOOFF_STATUS_EVENT",
&bt_woble->woble_setting_radio_off_status_event, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOOFF_COMPLETE_EVENT",
&bt_woble->woble_setting_radio_off_comp_event, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOON",
&bt_woble->woble_setting_radio_on, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOON_STATUS_EVENT",
&bt_woble->woble_setting_radio_on_status_event, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("RADIOON_COMPLETE_EVENT",
&bt_woble->woble_setting_radio_on_comp_event, 1, bdev->setting_file, FW_CFG_INX_LEN_2);
if (err)
goto LOAD_END;
err = btmtk_load_fw_cfg_setting("APCF_RESUME",
bt_woble->woble_setting_apcf_resume, WOBLE_SETTING_COUNT, bdev->setting_file, FW_CFG_INX_LEN_2);
LOAD_END:
/* release setting file memory */
if (bdev) {
kfree(bdev->setting_file);
bdev->setting_file = NULL;
}
if (err)
BTMTK_ERR("%s: error return %d", __func__, err);
return err;
}
static void btmtk_check_wobx_debug_log(struct btmtk_dev *bdev)
{
/* 0xFF, 0xFF, 0xFF, 0xFF is log level */
u8 cmd[CHECK_WOBX_DEBUG_CMD_LEN] = { 0X01, 0xCE, 0xFC, 0x04, 0xFF, 0xFF, 0xFF, 0xFF };
u8 event[CHECK_WOBX_DEBUG_EVT_HDR_LEN] = { 0x04, 0xE8 };
int ret = -1;
BTMTK_INFO("%s: begin", __func__);
ret = btmtk_main_send_cmd(bdev,
cmd, CHECK_WOBX_DEBUG_CMD_LEN,
event, CHECK_WOBX_DEBUG_EVT_HDR_LEN,
0, 0, BTMTK_TX_PKT_FROM_HOST);
if (ret < 0)
BTMTK_ERR("%s: failed(%d)", __func__, ret);
/* Driver just prints the event to the kernel log in rx_work;
 * refer to the wiki for details.
 */
}
static int btmtk_handle_leaving_WoBLE_state(struct btmtk_woble *bt_woble)
{
int ret = -1;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
struct btmtk_dev *bdev = bt_woble->bdev;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: begin", __func__);
fstate = btmtk_fops_get_state(bdev);
if (!bdev->bt_cfg.support_woble_for_bt_disable) {
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not opened, return", __func__);
return 0;
}
}
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open yet(%d), need to start traffic before leaving woble",
__func__, fstate);
/* start traffic to receive events */
ret = bmain_info->hif_hook.open(bdev->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_open failed", __func__);
goto Finish;
}
}
if (is_support_unify_woble(bdev)) {
ret = btmtk_set_Woble_Radio_On(bt_woble);
if (ret < 0)
goto Finish;
ret = btmtk_set_Woble_APCF_Resume(bt_woble);
if (ret < 0)
goto Finish;
} else {
/* radio on cmd with wobx_mode_disable, used when unify woble off */
ret = btmtk_send_woble_resume_cmd(bdev);
}
Finish:
if (ret < 0) {
BTMTK_INFO("%s: woble_resume_fail!!!", __func__);
} else {
/* dump the wobx debug log */
btmtk_check_wobx_debug_log(bdev);
if (fstate != BTMTK_FOPS_STATE_OPENED) {
ret = btmtk_send_deinit_cmds(bdev);
if (ret < 0) {
BTMTK_ERR("%s, btmtk_send_deinit_cmds failed", __func__);
goto exit;
}
BTMTK_WARN("%s: fops is not open(%d), need to stop traffic after leaving woble",
__func__, fstate);
/* stop traffic to stop receiving data from FW */
ret = bmain_info->hif_hook.close(bdev->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_close failed", __func__);
goto exit;
}
} else
bdev->power_state = BTMTK_DONGLE_STATE_POWER_ON;
BTMTK_INFO("%s: success", __func__);
}
exit:
BTMTK_INFO("%s: end", __func__);
return ret;
}
static int btmtk_handle_entering_WoBLE_state(struct btmtk_woble *bt_woble)
{
int ret = -1;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
int state = BTMTK_STATE_INIT;
struct btmtk_dev *bdev = bt_woble->bdev;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: begin", __func__);
fstate = btmtk_fops_get_state(bdev);
if (!bdev->bt_cfg.support_woble_for_bt_disable) {
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open yet(%d)!, return", __func__, fstate);
return 0;
}
}
state = btmtk_get_chip_state(bdev);
if (state == BTMTK_STATE_FW_DUMP) {
BTMTK_WARN("%s: FW dumping ongoing, don't send any cmd to FW!!!", __func__);
goto Finish;
}
if (bdev->chip_reset || bdev->subsys_reset) {
BTMTK_ERR("%s chip_reset is %d, subsys_reset is %d", __func__,
bdev->chip_reset, bdev->subsys_reset);
goto Finish;
}
/* Power on first if state is power off */
ret = btmtk_reset_power_on(bdev);
if (ret < 0) {
BTMTK_ERR("%s: reset power_on fail return", __func__);
goto Finish;
}
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open yet(%d), need to start traffic before enter woble",
__func__, fstate);
/* start traffic to receive events */
ret = bmain_info->hif_hook.open(bdev->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_open failed", __func__);
goto Finish;
}
}
if (is_support_unify_woble(bdev)) {
do {
typedef ssize_t (*func) (u16 u16Key, const char *buf, size_t size);
char *func_name = "MDrv_PM_Write_Key";
func pFunc = NULL;
ssize_t sret = 0;
u8 buf = 0;
pFunc = (func) btmtk_kallsyms_lookup_name(func_name);
if (pFunc && bdev->bt_cfg.unify_woble_type == 1) {
buf = 1;
sret = pFunc(PM_KEY_BTW, &buf, sizeof(u8));
BTMTK_INFO("%s: Invoke %s, buf = %d, sret = %zd", __func__,
func_name, buf, sret);
} else {
BTMTK_WARN("%s: No Exported Func Found [%s]", __func__, func_name);
}
} while (0);
ret = btmtk_send_woble_apcf_reserved(bdev);
if (ret < 0)
goto STOP_TRAFFIC;
ret = btmtk_send_woble_read_BDADDR_cmd(bdev);
if (ret < 0)
goto STOP_TRAFFIC;
ret = btmtk_set_Woble_APCF(bt_woble);
if (ret < 0)
goto STOP_TRAFFIC;
ret = btmtk_set_Woble_Radio_Off(bt_woble);
if (ret < 0)
goto STOP_TRAFFIC;
} else {
/* radio off cmd with wobx_mode_disable, used when unify woble off */
ret = btmtk_send_woble_suspend_cmd(bdev);
}
STOP_TRAFFIC:
if (fstate != BTMTK_FOPS_STATE_OPENED) {
BTMTK_WARN("%s: fops is not open(%d), need to stop traffic after enter woble",
__func__, fstate);
/* stop traffic to stop receiving data from FW */
ret = bmain_info->hif_hook.close(bdev->hdev);
if (ret < 0) {
BTMTK_ERR("%s, cif_close failed", __func__);
goto Finish;
}
}
Finish:
if (ret) {
bdev->power_state = BTMTK_DONGLE_STATE_ERROR;
btmtk_woble_wake_lock(bdev);
}
BTMTK_INFO("%s: end ret = %d, power_state =%d", __func__, ret, bdev->power_state);
return ret;
}
int btmtk_woble_suspend(struct btmtk_woble *bt_woble)
{
int ret = 0;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
struct btmtk_dev *bdev = bt_woble->bdev;
BTMTK_INFO("%s: enter", __func__);
fstate = btmtk_fops_get_state(bdev);
if (!is_support_unify_woble(bdev) && (fstate != BTMTK_FOPS_STATE_OPENED)) {
BTMTK_WARN("%s: woble not supported and BT is off, do nothing!", __func__);
goto exit;
}
ret = btmtk_handle_entering_WoBLE_state(bt_woble);
if (ret)
BTMTK_ERR("%s: btmtk_handle_entering_WoBLE_state return fail %d", __func__, ret);
if (bdev->bt_cfg.support_woble_by_eint) {
if (bt_woble->wobt_irq != 0 && atomic_read(&(bt_woble->irq_enable_count)) == 0) {
BTMTK_INFO("enable BT IRQ:%d", bt_woble->wobt_irq);
irq_set_irq_wake(bt_woble->wobt_irq, 1);
enable_irq(bt_woble->wobt_irq);
atomic_inc(&(bt_woble->irq_enable_count));
} else
BTMTK_INFO("irq_enable count:%d", atomic_read(&(bt_woble->irq_enable_count)));
}
exit:
BTMTK_INFO("%s: end", __func__);
return ret;
}
int btmtk_woble_resume(struct btmtk_woble *bt_woble)
{
int ret = -1;
unsigned char fstate = BTMTK_FOPS_STATE_INIT;
struct btmtk_dev *bdev = bt_woble->bdev;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_INFO("%s: enter", __func__);
fstate = btmtk_fops_get_state(bdev);
if (!is_support_unify_woble(bdev) && (fstate != BTMTK_FOPS_STATE_OPENED)) {
BTMTK_WARN("%s: woble not supported and BT is off, do nothing!", __func__);
goto exit;
}
if (bdev->power_state == BTMTK_DONGLE_STATE_ERROR) {
BTMTK_INFO("%s: In BTMTK_DONGLE_STATE_ERROR (possibly caused by suspend), do assert", __func__);
btmtk_send_assert_cmd(bdev);
ret = -EBADFD;
goto exit;
}
if (bdev->bt_cfg.support_woble_by_eint) {
if (bt_woble->wobt_irq != 0 && atomic_read(&(bt_woble->irq_enable_count)) == 1) {
BTMTK_INFO("disable BT IRQ:%d", bt_woble->wobt_irq);
atomic_dec(&(bt_woble->irq_enable_count));
disable_irq_nosync(bt_woble->wobt_irq);
} else
BTMTK_INFO("irq_enable count:%d", atomic_read(&(bt_woble->irq_enable_count)));
}
ret = btmtk_handle_leaving_WoBLE_state(bt_woble);
if (ret < 0) {
BTMTK_ERR("%s: btmtk_handle_leaving_WoBLE_state return fail %d", __func__, ret);
/* avoid RTC suspending again; do FW dump first */
btmtk_woble_wake_lock(bdev);
btmtk_send_assert_cmd(bdev);
goto exit;
}
if (bdev->bt_cfg.reset_stack_after_woble
&& bmain_info->reset_stack_flag == HW_ERR_NONE
&& fstate == BTMTK_FOPS_STATE_OPENED)
bmain_info->reset_stack_flag = HW_ERR_CODE_RESET_STACK_AFTER_WOBLE;
btmtk_send_hw_err_to_host(bdev);
BTMTK_INFO("%s: end(%d), reset_stack_flag = %d, fstate = %d", __func__, ret,
bmain_info->reset_stack_flag, fstate);
exit:
BTMTK_INFO("%s: end", __func__);
return ret;
}
static irqreturn_t btmtk_woble_isr(int irq, struct btmtk_woble *bt_woble)
{
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
BTMTK_DBG("%s begin", __func__);
disable_irq_nosync(bt_woble->wobt_irq);
atomic_dec(&(bt_woble->irq_enable_count));
BTMTK_INFO("disable BT IRQ, call wake lock");
__pm_wakeup_event(bmain_info->eint_ws, WAIT_POWERKEY_TIMEOUT);
input_report_key(bt_woble->WoBLEInputDev, KEY_WAKEUP, 1);
input_sync(bt_woble->WoBLEInputDev);
input_report_key(bt_woble->WoBLEInputDev, KEY_WAKEUP, 0);
input_sync(bt_woble->WoBLEInputDev);
BTMTK_DBG("%s end", __func__);
return IRQ_HANDLED;
}
static int btmtk_RegisterBTIrq(struct btmtk_woble *bt_woble)
{
struct device_node *eint_node = NULL;
int interrupts[2];
BTMTK_DBG("%s begin", __func__);
eint_node = of_find_compatible_node(NULL, NULL, "mediatek,woble_eint");
if (eint_node) {
BTMTK_INFO("Get woble_eint compatible node");
bt_woble->wobt_irq = irq_of_parse_and_map(eint_node, 0);
BTMTK_INFO("woble_irq number:%d", bt_woble->wobt_irq);
if (bt_woble->wobt_irq) {
of_property_read_u32_array(eint_node, "interrupts",
interrupts, ARRAY_SIZE(interrupts));
bt_woble->wobt_irqlevel = interrupts[1];
if (request_irq(bt_woble->wobt_irq, (void *)btmtk_woble_isr,
bt_woble->wobt_irqlevel, "woble-eint", bt_woble->bdev))
BTMTK_INFO("WOBTIRQ LINE NOT AVAILABLE!!");
else {
BTMTK_INFO("disable BT IRQ");
disable_irq_nosync(bt_woble->wobt_irq);
}
} else
BTMTK_INFO("can't find woble_eint irq");
} else {
bt_woble->wobt_irq = 0;
BTMTK_INFO("can't find woble_eint compatible node");
}
BTMTK_DBG("%s end", __func__);
return 0;
}
static int btmtk_woble_input_init(struct btmtk_woble *bt_woble)
{
int ret = 0;
bt_woble->WoBLEInputDev = input_allocate_device();
if (!bt_woble->WoBLEInputDev || IS_ERR(bt_woble->WoBLEInputDev)) {
BTMTK_ERR("input_allocate_device error");
return -ENOMEM;
}
bt_woble->WoBLEInputDev->name = "WOBLE_INPUT_DEVICE";
bt_woble->WoBLEInputDev->id.bustype = BUS_HOST;
bt_woble->WoBLEInputDev->id.vendor = 0x0002;
bt_woble->WoBLEInputDev->id.product = 0x0002;
bt_woble->WoBLEInputDev->id.version = 0x0002;
__set_bit(EV_KEY, bt_woble->WoBLEInputDev->evbit);
__set_bit(KEY_WAKEUP, bt_woble->WoBLEInputDev->keybit);
ret = input_register_device(bt_woble->WoBLEInputDev);
if (ret < 0) {
input_free_device(bt_woble->WoBLEInputDev);
BTMTK_ERR("input_register_device %d", ret);
return ret;
}
return ret;
}
static void btmtk_woble_input_deinit(struct btmtk_woble *bt_woble)
{
if (bt_woble->WoBLEInputDev) {
input_unregister_device(bt_woble->WoBLEInputDev);
input_free_device(bt_woble->WoBLEInputDev);
bt_woble->WoBLEInputDev = NULL;
}
}
static void btmtk_free_woble_setting_file(struct btmtk_woble *bt_woble)
{
btmtk_free_fw_cfg_struct(bt_woble->woble_setting_apcf, WOBLE_SETTING_COUNT);
btmtk_free_fw_cfg_struct(bt_woble->woble_setting_apcf_fill_mac, WOBLE_SETTING_COUNT);
btmtk_free_fw_cfg_struct(bt_woble->woble_setting_apcf_fill_mac_location, WOBLE_SETTING_COUNT);
btmtk_free_fw_cfg_struct(bt_woble->woble_setting_apcf_resume, WOBLE_SETTING_COUNT);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_off, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_off_status_event, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_off_comp_event, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_on, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_on_status_event, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_radio_on_comp_event, 1);
btmtk_free_fw_cfg_struct(&bt_woble->woble_setting_wakeup_type, 1);
bt_woble->woble_setting_len = 0;
kfree(bt_woble->woble_setting_file_name);
bt_woble->woble_setting_file_name = NULL;
}
int btmtk_woble_initialize(struct btmtk_dev *bdev, struct btmtk_woble *bt_woble)
{
int err = 0;
struct btmtk_main_info *bmain_info = btmtk_get_main_info();
bt_woble->bdev = bdev;
/* Need to add Woble flow */
if (is_support_unify_woble(bdev)) {
if (bt_woble->woble_setting_file_name == NULL) {
bt_woble->woble_setting_file_name = kzalloc(MAX_BIN_FILE_NAME_LEN, GFP_KERNEL);
if (!bt_woble->woble_setting_file_name) {
BTMTK_ERR("%s: alloc memory fail (bt_woble->woble_setting_file_name)", __func__);
err = -1;
goto end;
}
}
if (is_mt7663(bdev->chip_id))
memcpy(bt_woble->woble_setting_file_name, WOBLE_SETTING_FILE_NAME_7663,
sizeof(WOBLE_SETTING_FILE_NAME_7663));
if (is_mt7922(bdev->chip_id) || is_mt7961(bdev->chip_id))
memcpy(bt_woble->woble_setting_file_name, WOBLE_SETTING_FILE_NAME_7961,
sizeof(WOBLE_SETTING_FILE_NAME_7961));
BTMTK_INFO("%s: woble setting file name is %s", __func__, bt_woble->woble_setting_file_name);
btmtk_load_woble_setting(bt_woble->woble_setting_file_name,
bdev->intf_dev,
&bt_woble->woble_setting_len,
bt_woble);
/* if reset_stack is set, power the chip on after chip reset completes
 * so the stack can be reset
 */
if (bmain_info->reset_stack_flag) {
err = btmtk_reset_power_on(bdev);
if (err < 0) {
BTMTK_ERR("reset power on failed!");
goto err;
}
}
}
if (bdev->bt_cfg.support_woble_by_eint) {
btmtk_woble_input_init(bt_woble);
btmtk_RegisterBTIrq(bt_woble);
}
return 0;
err:
btmtk_free_woble_setting_file(bt_woble);
end:
return err;
}
void btmtk_woble_uninitialize(struct btmtk_woble *bt_woble)
{
struct btmtk_dev *bdev = bt_woble->bdev;
if (bdev == NULL) {
BTMTK_ERR("%s: bdev == NULL", __func__);
return;
}
BTMTK_INFO("%s begin", __func__);
if (bdev->bt_cfg.support_woble_by_eint) {
if (bt_woble->wobt_irq != 0 && atomic_read(&(bt_woble->irq_enable_count)) == 1) {
BTMTK_INFO("disable BT IRQ:%d", bt_woble->wobt_irq);
atomic_dec(&(bt_woble->irq_enable_count));
disable_irq_nosync(bt_woble->wobt_irq);
} else
BTMTK_INFO("irq_enable count:%d", atomic_read(&(bt_woble->irq_enable_count)));
free_irq(bt_woble->wobt_irq, bdev);
btmtk_woble_input_deinit(bt_woble);
}
btmtk_free_woble_setting_file(bt_woble);
bt_woble->bdev = NULL;
}


@@ -0,0 +1,30 @@
# bt.cfg; currently only '#' is parsed, as an annotation (comment) marker
SUPPORT_UNIFY_WOBLE 1
# this item is valid only when SUPPORT_UNIFY_WOBLE is enabled. 0: LEGACY, LOW or HIGH. 1: Waveform. 2: IR
UNIFY_WOBLE_TYPE 1
SUPPORT_LEGACY_WOBLE 0
SUPPORT_WOBLE_BY_EINT 0
BT_DONGLE_RESET_GPIO_PIN 220
SAVE_FW_DUMP_IN_KERNEL 0
# this item is valid only when saving FW dump in kernel is supported
SYS_LOG_FILE_NAME /sdcard/bt_sys_log
# this item is valid only when saving FW dump in kernel is supported
FW_DUMP_FILE_NAME /sdcard/bt_fw_dump
SUPPORT_DONGLE_RESET 1
SUPPORT_FULL_FW_DUMP 0
SUPPORT_WOBLE_WAKELOCK 1
SUPPORT_WOBLE_FOR_BT_DISABLE 1
RESET_STACK_AFTER_WOBLE 1
SUPPORT_BT_SINGLE_SKU 1
SUPPORT_PICUS_TO_HOST 0
SUPPORT_AUTO_PICUS 0
PICUS_ENABLE_COMMAND: 0x01, 0x5D, 0xFC, 0x04, 0x00, 0x00, 0x02, 0x02,
PICUS_FILTER_COMMAND: 0x5F, 0xFC, 0x2E, 0x50, 0x01, 0x0A, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0xE0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x01, 0x01, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
WMT_CMD000: 0x6F, 0xFC, 0x06, 0x01, 0x06, 0x02, 0x00, 0x03, 0x01,
# phase1 means this wmt cmd should be sent to FW after the patch is downloaded or after BT power on
# this wmt cmd is used to enable bus vio
PHASE1_WMT_CMD000: 0x6F, 0xFC, 0x08, 0x01, 0x02, 0x04, 0x00, 0x1D, 0x01, 0x00, 0x00,
# vendor cmds need to be sent to FW after BT power on
VENDOR_CMD000: 0x96, 0xFD, 0x18, 0x03, 0x03, 0x06, 0x0E, 0x03, 0x03, 0x07, 0x0E, 0x03, 0x03, 0x08, 0x0E, 0x03, 0x03, 0x0B, 0x11, 0x03, 0x03, 0x0D, 0x0C, 0x03, 0x03, 0x13, 0x10,
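The command entries above ("NAME: 0xAA, 0xBB, …") are colon-separated hex byte lists with '0x' prefixes, trailing commas, and '#' annotations, per the format comments. A minimal parsing sketch for one such line (parse_cfg_cmd() is a hypothetical helper for illustration, not the driver's actual loader):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: parse one "NAME: 0xAA, 0xBB," style line into a
 * byte array. Returns the byte count, or -1 on a malformed line. */
static int parse_cfg_cmd(const char *line, unsigned char *out, int max_len)
{
	const char *p = strchr(line, ':');
	int n = 0;

	if (!p)
		return -1;
	p++;
	while (*p && n < max_len) {
		char *end;
		unsigned long v;

		while (*p == ' ' || *p == '\t' || *p == ',')
			p++;
		if (*p == '\0' || *p == '#')	/* '#' starts an annotation */
			break;
		v = strtoul(p, &end, 16);	/* accepts the 0x prefix */
		if (end == p || v > 0xFF)
			return -1;
		out[n++] = (unsigned char)v;
		p = end;
	}
	return n;
}
```

For example, parsing "WMT_CMD000: 0x6F, 0xFC, 0x06," yields 3 bytes starting with 0x6F.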


@@ -0,0 +1,28 @@
# File path depends on the kernel request_firmware API, e.g. /etc/firmware or /lib/firmware
# Format:
# 1. ":" is necessary between name & command
# 2. HEX need prefix '0x'
# 3. Each HEX end need ','
# Only 10 setting groups are supported; scan interval and window have been added
Turnkey:
APCF00:0x57, 0xFD, 0x27, 0x06, 0x00, 0x0A, 0x46, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x43, 0x52, 0x4B, 0x54, 0x4D, 0xFF, 0xFF, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
APCF_ADD_MAC00:0x01,
APCF_ADD_MAC_LOCATION00:0x09,
APCF01:0x57, 0xFD, 0x0A, 0x01, 0x00, 0x0A, 0x20, 0x00, 0x20, 0x00, 0x01, 0x80, 0x00,
APCF_ADD_MAC01:0x00,
APCF_ADD_MAC_LOCATION01:0x00,
RADIOOFF00:0xC9, 0xFC, 0x1E, 0x01, 0x20, 0x02, 0x00, 0x01, 0x02, 0x01, 0x00, 0x02, 0x40, 0x0A, 0x02, 0x41, 0x0F, 0x05, 0x24, 0x20, 0x04, 0x32, 0x00, 0x09, 0x26, 0xC0, 0x12, 0x00, 0x00, 0x12, 0x00, 0x00, 0x00,
WAKEUP_TYPE_LEGACY00:0x05, 0x10, 0x00, 0x00, 0x6B, 0x00, 0x04, 0x42, 0x09, 0x01, 0x02, 0x0A, 0x48, 0x01, 0xFF, 0xFF, 0x00, 0x00, 0x10, 0x00, 0x00, 0x80,
WAKEUP_TYPE_WAVEFORM00:0x05, 0x10, 0x00, 0x00, 0x20, 0x00, 0x04, 0x42, 0x09, 0x01, 0x02, 0xA2, 0x48, 0x01, 0xFF, 0xFF, 0x00, 0x00, 0x10, 0x00, 0x00, 0xFE, 0xA5, 0xA5, 0x00, 0x00, 0x10, 0x00, 0x4A, 0xFF, 0x5A, 0x5A, 0x00, 0x00, 0x10, 0x00, 0xB4, 0xFE, 0x96, 0x96, 0x00, 0x00, 0x10, 0x00, 0x2C, 0xFF, 0x69, 0x69, 0x00, 0x00, 0x10, 0x00, 0xD2, 0xFE, 0x78, 0x78, 0x00, 0x00, 0x10, 0x00, 0xF0, 0xFE, 0x11, 0x11, 0x00, 0x00, 0x10, 0x00, 0x22, 0xFE, 0x22, 0x22, 0x00, 0x00, 0x10, 0x00, 0x44, 0xFE, 0x33, 0x33, 0x00, 0x00, 0x10, 0x00, 0x66, 0xFE, 0x44, 0x44, 0x00, 0x00, 0x10, 0x00, 0x88, 0xFE, 0x55, 0x55, 0x00, 0x00, 0x10, 0x00, 0xAA, 0xFE, 0x66, 0x66, 0x00, 0x00, 0x10, 0x00, 0xCC, 0xFE, 0x77, 0x77, 0x00, 0x00, 0x10, 0x00, 0xEE, 0xFE, 0x88, 0x88, 0x00, 0x00, 0x10, 0x00, 0x10, 0xFF, 0x99, 0x99, 0x00, 0x00, 0x10, 0x00, 0x32, 0xFF, 0xAA, 0xAA, 0x00, 0x00, 0x10, 0x00, 0x54, 0xFF, 0xBB, 0xBB, 0x00, 0x00, 0x10, 0x00, 0x76, 0xFF, 0xCC, 0xCC, 0x00, 0x00, 0x10, 0x00, 0x98, 0xFF, 0xDD, 0xDD, 0x00, 0x00, 0x10, 0x00, 0xBA, 0xFF, 0xEE, 0xEE, 0x00, 0x00, 0x10, 0x00, 0xDC, 0xFF,
WAKEUP_TYPE_IR00:
RADIOOFF_STATUS_EVENT00:0x0f, 0x04, 0x00, 0x01, 0xC9, 0xFC,
RADIOOFF_COMPLETE_EVENT00:0xe6, 0x02, 0x08, 0x00,
RADIOON00:0xC9, 0xFC, 0x05, 0x01, 0x21, 0x02, 0x00, 0x00,
RADIOON_STATUS_EVENT00:0x0F, 0x04, 0x00, 0x01, 0xC9, 0xFC,
RADIOON_COMPLETE_EVENT00:0xe6, 0x02, 0x08, 0x01,
APCF_RESUME00:0x57, 0xFD, 0x03, 0x01, 0x01, 0x0A,

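The APCF_ADD_MACxx / APCF_ADD_MAC_LOCATIONxx pair above drives the fill-in step seen in btmtk_set_Woble_APCF(): when the flag is 1, the local BD address is copied into the APCF command at location + 1. A small sketch of that copy (fill_apcf_mac() is a hypothetical name for illustration only):

```c
#include <assert.h>
#include <string.h>

#define BD_ADDRESS_SIZE 6

/* Hypothetical helper mirroring the memcpy in btmtk_set_Woble_APCF():
 * if add_mac is 1, copy the 6-byte BD address into the APCF command
 * starting at mac_location + 1; otherwise leave the command untouched.
 * The caller must ensure the buffer holds mac_location + 1 + 6 bytes. */
static void fill_apcf_mac(unsigned char *apcf_cmd, unsigned char add_mac,
			  unsigned char mac_location, const unsigned char *bdaddr)
{
	if (add_mac == 1)
		memcpy(apcf_cmd + mac_location + 1, bdaddr, BD_ADDRESS_SIZE);
}
```

With APCF_ADD_MAC00 = 0x01 and APCF_ADD_MAC_LOCATION00 = 0x09 as above, the address lands at offsets 10..15 of the command.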