clk: msm: Add snapshot of clock framework files
This is a snapshot of the clock framework files as of msm-4.9
commit cc7a1542d987 ("msm: ipa: Fix assignment warning with clang").
Below is a brief description of the additional changes made:
1. Add the COMMON_CLK_MSM config flag for conditional compilation of
   common files shared between the COMMON_CLK_MSM and COMMON_CLK_QCOM
   clock framework files.
2. Add reset controller framework files for BCR operation.
3. Add conditional compilation support for FTRACE clock functions
   to maintain compatibility for clock frameworks based on
   COMMON_CLK_MSM and COMMON_CLK_QCOM.
4. Add files for GDSC operation.
5. Add BCR reset maps.
6. Resolve a compilation issue for qti-quin-gvm.
Some PLL HWs require an additional delay for the PLL lock detect
to stabilize after being brought out of reset before SW polls for
lock detect status. Add a delay of 50us before polling the lock_det
bit by introducing new PLL ops.
Also, if the PLL fails to lock, record additional PLL debug
information in the kernel log before panic().
'commit 90cb5ecd7cfd ("clk: msm: Add delay of 50uSec before polling
lock_detect status")'.
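The poll-with-delay sequence described above can be sketched in plain C. This is a minimal simulation, not the actual driver ops: the register bit position, the stub names (udelay_stub, read_pll_status), and the timings are all illustrative assumptions.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the PLL status register and udelay();
 * the lock_det bit position and lock latency are assumptions. */
static uint32_t pll_status;
static unsigned int elapsed_us;

static void udelay_stub(unsigned int us)
{
	elapsed_us += us;
}

static uint32_t read_pll_status(void)
{
	/* Simulate lock_det asserting some time after reset deassertion. */
	if (elapsed_us >= 150)
		pll_status |= 1u << 16;	/* hypothetical lock_det bit */
	return pll_status;
}

/* Wait 50us for the lock-detect circuit to stabilize, then poll. */
static int pll_poll_lock(unsigned int timeout_us)
{
	unsigned int waited = 0;

	udelay_stub(50);	/* settle time before the first poll */
	while (waited < timeout_us) {
		if (read_pll_status() & (1u << 16))
			return 0;	/* locked */
		udelay_stub(10);
		waited += 10;
	}
	return -1;	/* caller would log PLL debug state and panic() */
}
```

On timeout the real change records the extra PLL debug information before panic(); here that is reduced to an error return.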
1:1 is the preferred MN divider combination for DSI PCLK in the
regular 24 bpp display use-case, as per hardware recommendation.
Update the divider array to give first priority to the 1:1
divider combination.
'commit a270c07a1e21 ("clk: msm: update the fractional divider
array for DSI PCLK")'.
For some PLLs, there may be a need to configure the calibration
L value that the PLL uses for auto-calibration whenever it comes
out of reset. Add support for this by writing to the USER_CTL_HI
register.
'commit 05bd8759e347 ("clk: msm: Add support to configure
calibration L value")'.
Change-Id: I4260a9807e5e1b116db8f43fb9cfbbb55a5a8d67
Signed-off-by: Taniya Das <tdas@codeaurora.org>
Signed-off-by: Suresh Kumar Allam <allamsuresh@codeaurora.org>
@@ -0,0 +1,42 @@
Qualcomm Technologies, Inc. Application CPU clock driver

clock-a7 is the driver for the Root Clock Generator (RCG) hw which controls
the cpu rate. RCGs support selecting one of several clock inputs, as well as
a configurable divider. This hw is different from normal RCGs in that it may
optionally have a register which encodes the maximum rate supported by hw.

Required properties:
- compatible:		"qcom,clock-a7-mdm9607"
- reg:			pairs of physical address and region size
- reg-names:		"rcg-base" is expected
- clock-names:		list of names of clock inputs
- qcom,speedX-bin-vZ:	A table of CPU frequency (Hz) to regulator voltage
			(uV) mapping, in the format: <freq uV>.
			This represents the max frequency possible for each
			possible power configuration for a CPU that's binned
			as speed bin X, speed bin revision Z. Speed bin values
			can be between [0-7] and the version can be between
			[0-3].
- cpu-vdd-supply:	regulator phandle for the cpu power domain.

Optional properties:
- reg-names:		"efuse", "efuse1"
- qcom,safe-freq:	Frequency in Hz. When switching rates from A to B,
			the mux div clock will instead switch from
			A -> safe_freq -> B.
- qcom,enable-opp:	Register the cpu clock with the OPP framework.

Example:
	qcom,acpuclk@f9011050 {
		compatible = "qcom,clock-a7-8226";
		reg = <0xf9011050 0x8>;
		reg-names = "rcg-base";
		cpu-vdd-supply = <&apc_vreg_corner>;

		clock-names = "clk-4", "clk-5";
		qcom,speed0-bin-v0 =
			<384000000 1150000>,
			<600000000 1200000>;
	};
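The qcom,safe-freq behaviour documented above (switching A -> safe_freq -> B) can be sketched as a small simulation. mux_set_rate and mux_div_set_rate here are hypothetical helpers standing in for the mux-div clock ops, not the real API:

```c
#include <assert.h>

static unsigned long current_rate;
static int switch_count;

/* Hypothetical low-level mux write. */
static void mux_set_rate(unsigned long rate)
{
	current_rate = rate;
	switch_count++;
}

/* A rate change parks the mux at safe_freq while the source is
 * reconfigured, so the CPU never runs an unsupported setting. */
static void mux_div_set_rate(unsigned long safe_freq, unsigned long new_rate)
{
	if (current_rate == new_rate)
		return;
	if (safe_freq)
		mux_set_rate(safe_freq);	/* A -> safe_freq */
	/* the source PLL would be reprogrammed for new_rate here */
	mux_set_rate(new_rate);			/* safe_freq -> B */
}
```

The two-step switch is why the binding allows overriding the driver's built-in safe frequency per target.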
@@ -0,0 +1,69 @@
Qualcomm Technologies, Inc. MSM Clock controller

Qualcomm Technologies, Inc. MSM Clock controller devices contain PLLs, root
clock generators and other clocking hardware blocks that provide stable, low
power clocking to hardware blocks on SoCs. The clock controller device node
lists the power supplies that need to be scaled using the vdd_*-supply
property.

Minor differences between hardware revisions are handled in code by re-using
the compatible string to indicate the revision.

Required properties:
- compatible:		Must be one of the following:
			"qcom,gcc-mdm9607"
			"qcom,cc-debug-mdm9607"

- reg:			Pairs of physical base addresses and region sizes of
			memory mapped registers.
- reg-names:		Names of the bases for the above registers. Currently,
			there is one expected base: "cc_base". Optional
			reg-names are "apcs_base", "meas", "mmss_base",
			"lpass_base", "apcs_c0_base", "apcs_c1_base",
			"apcs_cci_base", "efuse".

Optional properties:
- vdd_dig-supply:	The digital logic rail supply.
- <pll>_dig-supply:	Some PLLs might have a separate digital supply on some
			targets. These properties will be provided on those
			targets for specific PLLs.
- <pll>_analog-supply:	Some PLLs might have a separate analog supply on some
			targets. These properties will be provided on those
			targets for specific PLLs.
- vdd_gpu_mx-supply:	MX rail supply for the GPU core.
- #clock-cells:		If this device will also be providing controllable
			clocks, the #clock-cells property needs to be specified.
			This will allow the common clock device tree framework
			to recognize _this_ device node as a clock provider.
- qcom,<clk>-corner-<vers>: List of frequency voltage pairs that the clock can
			operate at. Drivers can use the OPP library API to
			operate on the list of OPPs registered using these
			values.
- qcom,<clk>-speedbinX:	A table of frequency (Hz) to voltage (corner) mapping
			that represents the max frequency possible for each
			supported voltage level for the clock.
			'X' is the speed bin into which the device falls;
			a bin will have unique frequency-voltage relationships.
			The value 'X' is read from efuse registers, and the
			right table is picked from multiple possible tables.
- qcom,<clock-name>-opp-handle: phandle references to the devices for which the
			OPP table is filled with the clock frequency and
			voltage values.
- qcom,<clock-name>-opp-store-vcorner: phandle references to the devices for
			which the OPP table is filled with the clock frequency
			and voltage corner/level.

Example:
	clock_rpm: qcom,rpmcc@fc400000 {
		compatible = "qcom,rpmcc-8974";
		reg = <0xfc400000 0x4000>;
		reg-names = "cc_base";
		#clock-cells = <1>;
	};

	clock_gcc: qcom,gcc@fc400000 {
		compatible = "qcom,gcc-8974";
		reg = <0xfc400000 0x4000>;
		reg-names = "cc_base";
		vdd_dig-supply = <&pm8841_s2_corner>;
		#clock-cells = <1>;
	};
@@ -0,0 +1,22 @@
Qualcomm Technologies MSM Clock Controller

Required properties:
- compatible:		shall contain "qcom,msm-clock-controller"
- reg:			shall contain base register location and length
- reg-names:		names of registers listed in the same order as in
			the reg property.
- #clock-cells:		shall contain 1
- #reset-cells:		shall contain 1

Optional properties:
- vdd_<rail>-supply:	The logic rail supply.

Example:
	clock_gcc: qcom,gcc@1800000 {
		compatible = "qcom,msm-clock-controller";
		reg = <0x1800000 0x80000>;
		reg-names = "cc-base";
		#clock-cells = <1>;
		clock-names = "a7_debug_clk";
		clocks = <&clock_a7pll clk_a7_debug_mux>;
	};
@@ -57,7 +57,7 @@ config ARM64
	select ARM_PSCI_FW
	select BUILDTIME_EXTABLE_SORT
	select CLONE_BACKWARDS
-	select COMMON_CLK
+	select COMMON_CLK if !ARCH_QCOM
	select CPU_PM if (SUSPEND || CPU_IDLE)
	select DCACHE_WORD_ACCESS
	select EDAC_SUPPORT

@@ -138,12 +138,17 @@ config ARCH_QCOM
	select MFD_CORE
	select SND_SOC_COMPRESS
	select SND_HWDEP
+	select CLKDEV_LOOKUP
+	select HAVE_CLK
+	select HAVE_CLK_PREPARE
+	select PM_OPP
	help
	  This enables support for the ARMv8 based Qualcomm chipsets.

config ARCH_SM8150
	bool "Enable Support for Qualcomm Technologies, Inc. SM8150"
	depends on ARCH_QCOM
+	select COMMON_CLK
	select COMMON_CLK_QCOM
	help
	  This enables support for the SM8150 chipset. If you do not

@@ -152,6 +157,7 @@ config ARCH_SM8150
config ARCH_SDMSHRIKE
	bool "Enable Support for Qualcomm Technologies, Inc. SDMSHRIKE"
	depends on ARCH_QCOM
+	select COMMON_CLK
	select COMMON_CLK_QCOM
	help
	  This configuration option enables support to build kernel for

@@ -162,6 +168,7 @@ config ARCH_SDMSHRIKE
config ARCH_SM6150
	bool "Enable Support for Qualcomm Technologies, Inc. SM6150"
	depends on ARCH_QCOM
+	select COMMON_CLK
	select COMMON_CLK_QCOM
	help
	  This enables support for the SM6150 chipset. If you do not

@@ -170,6 +177,7 @@ config ARCH_SM6150
config ARCH_ATOLL
	bool "Enable Support for Qualcomm Technologies, Inc. ATOLL"
	depends on ARCH_QCOM
+	select COMMON_CLK
	select COMMON_CLK_QCOM
	select QTI_PDC_ATOLL
	help

@@ -179,6 +187,7 @@ config ARCH_ATOLL
config ARCH_QCS405
	bool "Enable Support for Qualcomm Technologies, Inc. QCS405"
	depends on ARCH_QCOM
+	select COMMON_CLK
	select COMMON_CLK_QCOM
	help
	  This configuration option enables support to build kernel for

@@ -189,7 +198,8 @@ config ARCH_QCS405
config ARCH_QCS403
	bool "Enable Support for Qualcomm Technologies, Inc. QCS403"
	depends on ARCH_QCOM
-	select COMMON_CLK_QCOM
+	select COMMON_CLK
+	select COMMON_CLK_QCOM
	help
	  This configuration option enables support to build kernel for
	  QCS403 SoC.

@@ -199,6 +209,7 @@ config ARCH_QCS403
config ARCH_SDMMAGPIE
	bool "Enable Support for Qualcomm Technologies, Inc. SDMMAGPIE"
	depends on ARCH_QCOM
+	select COMMON_CLK
	select COMMON_CLK_QCOM
	help
	  This enables support for the SDMMAGPIE chipset. If you do not

@@ -207,6 +218,7 @@ config ARCH_SDMMAGPIE
config ARCH_TRINKET
	bool "Enable Support for Qualcomm Technologies, Inc. TRINKET"
	depends on ARCH_QCOM
+	select COMMON_CLK
	select COMMON_CLK_QCOM
	help
	  This enables support for the TRINKET chipset. If you do not
@@ -242,3 +242,5 @@ source "drivers/clk/ti/Kconfig"
source "drivers/clk/uniphier/Kconfig"

endmenu
+
+source "drivers/clk/msm/Kconfig"
@@ -2,7 +2,7 @@
# common clock types
obj-$(CONFIG_HAVE_CLK)		+= clk-devres.o clk-bulk.o
obj-$(CONFIG_CLKDEV_LOOKUP)	+= clkdev.o
-obj-$(CONFIG_COMMON_CLK)	+= clk.o
+obj-$(CONFIG_OF)		+= clk.o
obj-$(CONFIG_COMMON_CLK)	+= clk-divider.o
obj-$(CONFIG_COMMON_CLK)	+= clk-fixed-factor.o
obj-$(CONFIG_COMMON_CLK)	+= clk-fixed-rate.o

@@ -98,3 +98,4 @@ obj-$(CONFIG_X86) += x86/
endif
obj-$(CONFIG_ARCH_ZX)		+= zte/
obj-$(CONFIG_ARCH_ZYNQ)		+= zynq/
+obj-$(CONFIG_ARCH_QCOM)	+= msm/
@@ -30,6 +30,8 @@

#include "clk.h"

+#if defined(CONFIG_COMMON_CLK)
+
static DEFINE_SPINLOCK(enable_lock);
static DEFINE_MUTEX(prepare_lock);

@@ -4453,6 +4455,8 @@ int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb)
}
EXPORT_SYMBOL_GPL(clk_notifier_unregister);

+#endif /* CONFIG_COMMON_CLK */
+
#ifdef CONFIG_OF
/**
 * struct of_clk_provider - Clock provider registration structure

@@ -4490,6 +4494,8 @@ struct clk_hw *of_clk_hw_simple_get(struct of_phandle_args *clkspec, void *data)
}
EXPORT_SYMBOL_GPL(of_clk_hw_simple_get);

+#if defined(CONFIG_COMMON_CLK)
+
struct clk *of_clk_src_onecell_get(struct of_phandle_args *clkspec, void *data)
{
	struct clk_onecell_data *clk_data = data;

@@ -4519,6 +4525,29 @@ of_clk_hw_onecell_get(struct of_phandle_args *clkspec, void *data)
}
EXPORT_SYMBOL_GPL(of_clk_hw_onecell_get);

+#endif /* CONFIG_COMMON_CLK */
+
+/**
+ * of_clk_del_provider() - Remove a previously registered clock provider
+ * @np: Device node pointer associated with clock provider
+ */
+void of_clk_del_provider(struct device_node *np)
+{
+	struct of_clk_provider *cp;
+
+	mutex_lock(&of_clk_mutex);
+	list_for_each_entry(cp, &of_clk_providers, link) {
+		if (cp->node == np) {
+			list_del(&cp->link);
+			of_node_put(cp->node);
+			kfree(cp);
+			break;
+		}
+	}
+	mutex_unlock(&of_clk_mutex);
+}
+EXPORT_SYMBOL_GPL(of_clk_del_provider);
+
/**
 * of_clk_add_provider() - Register a clock provider for a node
 * @np: Device node pointer associated with clock provider

@@ -4620,27 +4649,6 @@ int devm_of_clk_add_hw_provider(struct device *dev,
}
EXPORT_SYMBOL_GPL(devm_of_clk_add_hw_provider);

-/**
- * of_clk_del_provider() - Remove a previously registered clock provider
- * @np: Device node pointer associated with clock provider
- */
-void of_clk_del_provider(struct device_node *np)
-{
-	struct of_clk_provider *cp;
-
-	mutex_lock(&of_clk_mutex);
-	list_for_each_entry(cp, &of_clk_providers, link) {
-		if (cp->node == np) {
-			list_del(&cp->link);
-			of_node_put(cp->node);
-			kfree(cp);
-			break;
-		}
-	}
-	mutex_unlock(&of_clk_mutex);
-}
-EXPORT_SYMBOL_GPL(of_clk_del_provider);
-
static int devm_clk_provider_match(struct device *dev, void *res, void *data)
{
	struct device_node **np = res;

@@ -4790,8 +4798,10 @@ const char *of_clk_get_parent_name(struct device_node *np, int index)
		else
			clk_name = NULL;
	} else {
+#if defined(CONFIG_COMMON_CLK)
		clk_name = __clk_get_name(clk);
		clk_put(clk);
+#endif
	}
}

@@ -4822,6 +4832,8 @@ int of_clk_parent_fill(struct device_node *np, const char **parents,
}
EXPORT_SYMBOL_GPL(of_clk_parent_fill);

+#if defined(CONFIG_COMMON_CLK)
+
struct clock_provider {
	of_clk_init_cb_t clk_init_cb;
	struct device_node *np;

@@ -4972,4 +4984,7 @@ void __init of_clk_init(const struct of_device_id *matches)
			force = true;
	}
}
+
+#endif /* CONFIG_COMMON_CLK */
+
#endif
@@ -12,7 +12,7 @@
struct clk_hw;
struct clk_core;

-#if defined(CONFIG_OF) && defined(CONFIG_COMMON_CLK)
+#if defined(CONFIG_OF)
struct clk *__of_clk_get_from_provider(struct of_phandle_args *clkspec,
				       const char *dev_id, const char *con_id);
#endif

@@ -54,5 +54,4 @@ static struct clk_hw *__clk_get_hw(struct clk *clk)
	return (struct clk_hw *)clk;
}

-void clock_debug_print_enabled(void) {}
#endif
@@ -27,7 +27,7 @@
static LIST_HEAD(clocks);
static DEFINE_MUTEX(clocks_mutex);

-#if defined(CONFIG_OF) && defined(CONFIG_COMMON_CLK)
+#if defined(CONFIG_OF)
static struct clk *__of_clk_get(struct device_node *np, int index,
				const char *dev_id, const char *con_id)
{

@@ -73,14 +73,10 @@ static struct clk *__of_clk_get_by_name(struct device_node *np,
		if (name)
			index = of_property_match_string(np, "clock-names", name);
		clk = __of_clk_get(np, index, dev_id, name);
-		if (!IS_ERR(clk)) {
+		if (!IS_ERR(clk))
			break;
-		} else if (name && index >= 0) {
-			if (PTR_ERR(clk) != -EPROBE_DEFER)
-				pr_err("ERROR: could not get clock %pOF:%s(%i)\n",
-					np, name ? name : "", index);
+		else if (name && index >= 0)
			return clk;
-		}

		/*
		 * No matching clock found on this node. If the parent node

@@ -190,7 +186,7 @@ struct clk *clk_get_sys(const char *dev_id, const char *con_id)
out:
	mutex_unlock(&clocks_mutex);

-	return cl ? clk : ERR_PTR(-ENOENT);
+	return cl ? cl->clk : ERR_PTR(-ENOENT);
}
EXPORT_SYMBOL(clk_get_sys);
drivers/clk/msm/Kconfig (new file, 18 lines)
@@ -0,0 +1,18 @@
config COMMON_CLK_MSM
	tristate "Support for MSM clock controllers"
	depends on OF
	depends on ARCH_QCOM
	select RATIONAL
	select ARCH_HAS_RESET_CONTROLLER
	help
	  This supports the clock controllers used by MSM devices, which
	  provide the global, mmss and gpu clock controllers.
	  Say Y if you want to support the clocks exposed by the MSM on
	  platforms such as msm8953 etc.

config MSM_CLK_CONTROLLER_V2
	bool "QTI clock driver"
	depends on COMMON_CLK_MSM
	---help---
	  Generate clock data structures from definitions found in
	  device tree.
drivers/clk/msm/Makefile (new file, 20 lines)
@@ -0,0 +1,20 @@
obj-$(CONFIG_COMMON_CLK_MSM) += clock.o
obj-$(CONFIG_COMMON_CLK_MSM) += clock-dummy.o
obj-$(CONFIG_COMMON_CLK_MSM) += clock-generic.o
obj-$(CONFIG_COMMON_CLK_MSM) += clock-local2.o
obj-$(CONFIG_COMMON_CLK_MSM) += clock-pll.o
obj-$(CONFIG_COMMON_CLK_MSM) += clock-alpha-pll.o
obj-$(CONFIG_COMMON_CLK_MSM) += clock-rpm.o
obj-$(CONFIG_COMMON_CLK_MSM) += clock-voter.o
obj-$(CONFIG_COMMON_CLK_MSM) += reset.o
obj-$(CONFIG_COMMON_CLK_MSM) += clock-debug.o
obj-$(CONFIG_COMMON_CLK_MSM) += gdsc.o

obj-$(CONFIG_MSM_CLK_CONTROLLER_V2) += msm-clock-controller.o

ifeq ($(CONFIG_COMMON_CLK_MSM), y)
# MDM9607
obj-$(CONFIG_ARCH_MDM9607) += clock-gcc-9607.o
# ACPU clock
obj-$(CONFIG_ARCH_MDM9607) += clock-a7.o
endif
drivers/clk/msm/clock-a7.c (new file, 502 lines)
@@ -0,0 +1,502 @@
/* Copyright (c) 2013-2019, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#define pr_fmt(fmt) "%s: " fmt, __func__

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/io.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/cpu.h>
#include <linux/mutex.h>
#include <linux/delay.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>
#include <linux/of.h>
#include <linux/clk/msm-clock-generic.h>
#include <linux/of_platform.h>
#include <linux/pm_opp.h>
#include <soc/qcom/clock-local2.h>
#include <dt-bindings/clock/msm-clocks-a7.h>

#include "clock.h"

static DEFINE_VDD_REGS_INIT(vdd_cpu, 1);

static struct mux_div_clk a7ssmux = {
	.ops = &rcg_mux_div_ops,
	.safe_freq = 300000000,
	.data = {
		.max_div = 32,
		.min_div = 2,
		.is_half_divider = true,
	},
	.c = {
		.dbg_name = "a7ssmux",
		.ops = &clk_ops_mux_div_clk,
		.vdd_class = &vdd_cpu,
		CLK_INIT(a7ssmux.c),
	},
	.parents = (struct clk_src[8]) {},
	.div_mask = BM(4, 0),
	.src_mask = BM(10, 8) >> 8,
	.src_shift = 8,
	.en_mask = 1,
};

static struct clk_lookup clock_tbl_a7[] = {
	CLK_LIST(a7ssmux),
	CLK_LOOKUP_OF("cpu0_clk", a7ssmux, "fe805664.qcom,pm"),
	CLK_LOOKUP_OF("cpu1_clk", a7ssmux, "fe805664.qcom,pm"),
	CLK_LOOKUP_OF("cpu2_clk", a7ssmux, "fe805664.qcom,pm"),
	CLK_LOOKUP_OF("cpu3_clk", a7ssmux, "fe805664.qcom,pm"),
	CLK_LOOKUP_OF("cpu0_clk", a7ssmux, "8600664.qcom,pm"),
	CLK_LOOKUP_OF("cpu1_clk", a7ssmux, "8600664.qcom,pm"),
	CLK_LOOKUP_OF("cpu2_clk", a7ssmux, "8600664.qcom,pm"),
	CLK_LOOKUP_OF("cpu3_clk", a7ssmux, "8600664.qcom,pm"),
};

static void print_opp_table(int a7_cpu)
{
	struct dev_pm_opp *oppfmax, *oppfmin;
	unsigned long apc0_fmax = a7ssmux.c.fmax[a7ssmux.c.num_fmax - 1];
	unsigned long apc0_fmin = a7ssmux.c.fmax[1];

	rcu_read_lock();
	oppfmax = dev_pm_opp_find_freq_exact(get_cpu_device(a7_cpu), apc0_fmax,
					     true);
	oppfmin = dev_pm_opp_find_freq_exact(get_cpu_device(a7_cpu), apc0_fmin,
					     true);

	/* One time information during boot. */
	pr_info("clock_cpu: a7: OPP voltage for %lu: %ld\n", apc0_fmin,
		dev_pm_opp_get_voltage(oppfmin));
	pr_info("clock_cpu: a7: OPP voltage for %lu: %ld\n", apc0_fmax,
		dev_pm_opp_get_voltage(oppfmax));

	rcu_read_unlock();
}

static int add_opp(struct clk *c, struct device *dev, unsigned long max_rate)
{
	unsigned long rate = 0;
	int level;
	int uv;
	long ret;
	bool first = true;
	int j = 1;

	while (1) {
		rate = c->fmax[j++];

		level = find_vdd_level(c, rate);
		if (level <= 0) {
			pr_warn("clock-cpu: no corner for %lu\n", rate);
			return -EINVAL;
		}

		uv = c->vdd_class->vdd_uv[level];
		if (uv < 0) {
			pr_warn("clock-cpu: no uv for %lu\n", rate);
			return -EINVAL;
		}

		ret = dev_pm_opp_add(dev, rate, uv);
		if (ret) {
			pr_warn("clock-cpu: failed to add OPP for %lu\n",
				rate);
			return ret;
		}

		/*
		 * The OPP pair for the lowest and highest frequency for
		 * each device that we're populating. This is important since
		 * this information will be used by thermal mitigation and the
		 * scheduler.
		 */
		if ((rate >= max_rate) || first) {
			if (first)
				first = false;
			else
				break;
		}
	}

	return 0;
}

static void populate_opp_table(struct platform_device *pdev)
{
	struct platform_device *apc_dev;
	struct device_node *apc_node;
	struct device *dev;
	unsigned long apc_fmax;
	int cpu, a7_cpu = 0;

	apc_node = of_parse_phandle(pdev->dev.of_node, "cpu-vdd-supply", 0);
	if (!apc_node) {
		pr_err("can't find the apc0 dt node.\n");
		return;
	}

	apc_dev = of_find_device_by_node(apc_node);
	if (!apc_dev) {
		pr_err("can't find the apc0 device node.\n");
		return;
	}

	apc_fmax = a7ssmux.c.fmax[a7ssmux.c.num_fmax - 1];

	for_each_possible_cpu(cpu) {
		a7_cpu = cpu;
		dev = get_cpu_device(cpu);
		if (!dev) {
			pr_err("can't find cpu device for attaching OPPs\n");
			return;
		}

		WARN(add_opp(&a7ssmux.c, dev, apc_fmax),
		     "Failed to add OPP levels for A7\n");
	}

	/* One time print during bootup */
	pr_info("clock-a7: OPP tables populated (cpu %d)\n", a7_cpu);

	print_opp_table(a7_cpu);
}

static int of_get_fmax_vdd_class(struct platform_device *pdev, struct clk *c,
				 char *prop_name)
{
	struct device_node *of = pdev->dev.of_node;
	int prop_len, i;
	struct clk_vdd_class *vdd = c->vdd_class;
	u32 *array;

	if (!of_find_property(of, prop_name, &prop_len)) {
		dev_err(&pdev->dev, "missing %s\n", prop_name);
		return -EINVAL;
	}

	prop_len /= sizeof(u32);
	if (prop_len % 2) {
		dev_err(&pdev->dev, "bad length %d\n", prop_len);
		return -EINVAL;
	}

	prop_len /= 2;
	vdd->level_votes = devm_kzalloc(&pdev->dev, prop_len * sizeof(int),
					GFP_KERNEL);
	if (!vdd->level_votes)
		return -ENOMEM;

	vdd->vdd_uv = devm_kzalloc(&pdev->dev, prop_len * sizeof(int),
				   GFP_KERNEL);
	if (!vdd->vdd_uv)
		return -ENOMEM;

	c->fmax = devm_kzalloc(&pdev->dev, prop_len * sizeof(unsigned long),
			       GFP_KERNEL);
	if (!c->fmax)
		return -ENOMEM;

	array = devm_kzalloc(&pdev->dev,
			     prop_len * sizeof(u32) * 2, GFP_KERNEL);
	if (!array)
		return -ENOMEM;

	of_property_read_u32_array(of, prop_name, array, prop_len * 2);
	for (i = 0; i < prop_len; i++) {
		c->fmax[i] = array[2 * i];
		vdd->vdd_uv[i] = array[2 * i + 1];
	}

	devm_kfree(&pdev->dev, array);
	vdd->num_levels = prop_len;
	vdd->cur_level = prop_len;
	vdd->use_max_uV = true;
	c->num_fmax = prop_len;
	return 0;
}

static void get_speed_bin(struct platform_device *pdev, int *bin, int *version)
{
	struct resource *res;
	void __iomem *base;
	u32 pte_efuse, redundant_sel, valid;

	*bin = 0;
	*version = 0;

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "efuse");
	if (!res) {
		dev_info(&pdev->dev,
			 "No speed/PVS binning available. Defaulting to 0!\n");
		return;
	}

	base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
	if (!base) {
		dev_warn(&pdev->dev,
			 "Unable to read efuse data. Defaulting to 0!\n");
		return;
	}

	pte_efuse = readl_relaxed(base);
	devm_iounmap(&pdev->dev, base);

	redundant_sel = (pte_efuse >> 24) & 0x7;
	*bin = pte_efuse & 0x7;
	valid = (pte_efuse >> 3) & 0x1;
	*version = (pte_efuse >> 4) & 0x3;

	if (redundant_sel == 1)
		*bin = (pte_efuse >> 27) & 0x7;

	if (!valid) {
		dev_info(&pdev->dev, "Speed bin not set. Defaulting to 0!\n");
		*bin = 0;
	} else {
		dev_info(&pdev->dev, "Speed bin: %d\n", *bin);
	}

	dev_info(&pdev->dev, "PVS version: %d\n", *version);
}

static void get_speed_bin_b(struct platform_device *pdev, int *bin,
			    int *version)
{
	struct resource *res;
	void __iomem *base;
	u32 pte_efuse, shift = 2, mask = 0x7;

	*bin = 0;
	*version = 0;

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "efuse1");
	if (res) {
		base = devm_ioremap(&pdev->dev, res->start,
				    resource_size(res));
		if (base) {
			pte_efuse = readl_relaxed(base);
			devm_iounmap(&pdev->dev, base);

			*version = (pte_efuse >> 18) & 0x3;
			if (!(*version)) {
				*bin = (pte_efuse >> 23) & 0x3;
				if (*bin) {
					dev_info(&pdev->dev, "Speed bin: %d PVS Version: %d\n",
						 *bin, *version);
					return;
				}
			}
		} else {
			dev_warn(&pdev->dev,
				 "Unable to read efuse1 data. Defaulting to 0!\n");
			return;
		}
	}

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "efuse");
	if (!res) {
		dev_info(&pdev->dev,
			 "No speed/PVS binning available. Defaulting to 0!\n");
		return;
	}
	base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
	if (!base) {
		dev_warn(&pdev->dev,
			 "Unable to read efuse data. Defaulting to 0!\n");
		return;
	}

	pte_efuse = readl_relaxed(base);
	devm_iounmap(&pdev->dev, base);

	*bin = (pte_efuse >> shift) & mask;

	dev_info(&pdev->dev, "Speed bin: %d PVS Version: %d\n", *bin,
		 *version);
}

static int of_get_clk_src(struct platform_device *pdev,
			  struct clk_src *parents)
{
	struct device_node *of = pdev->dev.of_node;
	int num_parents, i, j, index;
	struct clk *c;
	char clk_name[] = "clk-x";

	num_parents = of_property_count_strings(of, "clock-names");
	if (num_parents <= 0 || num_parents > 8) {
		dev_err(&pdev->dev, "missing clock-names\n");
		return -EINVAL;
	}

	j = 0;
	for (i = 0; i < 8; i++) {
		snprintf(clk_name, ARRAY_SIZE(clk_name), "clk-%d", i);
		index = of_property_match_string(of, "clock-names", clk_name);
		if (IS_ERR_VALUE(index))
			continue;

		parents[j].sel = i;
		parents[j].src = c = devm_clk_get(&pdev->dev, clk_name);
		if (IS_ERR(c)) {
			if (c != ERR_PTR(-EPROBE_DEFER))
				dev_err(&pdev->dev, "clk_get: %s fail\n",
					clk_name);
			return PTR_ERR(c);
		}
		j++;
	}

	return num_parents;
}

static struct platform_device *cpu_clock_a7_dev;

static int clock_a7_probe(struct platform_device *pdev)
{
	struct resource *res;
	int speed_bin = 0, version = 0, rc, cpu;
	unsigned long rate, aux_rate;
	struct clk *aux_clk, *main_pll;
	char prop_name[] = "qcom,speedX-bin-vX";
	const void *prop;
	bool compat_bin = false;
	bool compat_bin2 = false;
	bool opp_enable;

	compat_bin = of_device_is_compatible(pdev->dev.of_node,
					     "qcom,clock-a53-8916");
	compat_bin2 = of_device_is_compatible(pdev->dev.of_node,
					      "qcom,clock-a7-mdm9607");

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rcg-base");
	if (!res) {
		dev_err(&pdev->dev, "missing rcg-base\n");
		return -EINVAL;
	}
	a7ssmux.base = devm_ioremap(&pdev->dev, res->start,
				    resource_size(res));
	if (!a7ssmux.base) {
		dev_err(&pdev->dev, "ioremap failed for rcg-base\n");
		return -ENOMEM;
	}

	vdd_cpu.regulator[0] = devm_regulator_get(&pdev->dev, "cpu-vdd");
	if (IS_ERR(vdd_cpu.regulator[0])) {
		if (PTR_ERR(vdd_cpu.regulator[0]) != -EPROBE_DEFER)
			dev_err(&pdev->dev, "unable to get regulator\n");
		return PTR_ERR(vdd_cpu.regulator[0]);
	}

	rc = of_get_clk_src(pdev, a7ssmux.parents);
	if (IS_ERR_VALUE(rc))
		return rc;

	a7ssmux.num_parents = rc;

	/* Override the existing safe operating frequency */
	prop = of_get_property(pdev->dev.of_node, "qcom,safe-freq", NULL);
	if (prop)
		a7ssmux.safe_freq = of_read_ulong(prop, 1);

	if (compat_bin || compat_bin2)
		get_speed_bin_b(pdev, &speed_bin, &version);
	else
		get_speed_bin(pdev, &speed_bin, &version);

	snprintf(prop_name, ARRAY_SIZE(prop_name),
		 "qcom,speed%d-bin-v%d", speed_bin, version);
	rc = of_get_fmax_vdd_class(pdev, &a7ssmux.c, prop_name);
	if (rc) {
		/* Fall back to most conservative PVS table */
		dev_err(&pdev->dev, "Unable to load voltage plan %s!\n",
			prop_name);
		rc = of_get_fmax_vdd_class(pdev, &a7ssmux.c,
					   "qcom,speed0-bin-v0");
		if (rc) {
			dev_err(&pdev->dev,
				"Unable to load safe voltage plan\n");
			return rc;
		}
		dev_info(&pdev->dev, "Safe voltage plan loaded.\n");
	}

	rc = of_msm_clock_register(pdev->dev.of_node,
				   clock_tbl_a7, ARRAY_SIZE(clock_tbl_a7));
	if (rc) {
		dev_err(&pdev->dev, "msm_clock_register failed\n");
		return rc;
	}

	/* Force a PLL reconfiguration */
	aux_clk = a7ssmux.parents[0].src;
	main_pll = a7ssmux.parents[1].src;

	aux_rate = clk_get_rate(aux_clk);
	rate = clk_get_rate(&a7ssmux.c);
	clk_set_rate(&a7ssmux.c, aux_rate);
	clk_set_rate(main_pll, clk_round_rate(main_pll, 1));
	clk_set_rate(&a7ssmux.c, rate);

	/*
	 * We don't want the CPU clocks to be turned off at late init
	 * if CPUFREQ or HOTPLUG configs are disabled. So, bump up the
	 * refcount of these clocks. Any cpufreq/hotplug manager can assume
	 * that the clocks have already been prepared and enabled by the time
	 * they take over.
	 */
	get_online_cpus();
	for_each_online_cpu(cpu)
		WARN(clk_prepare_enable(&a7ssmux.c),
		     "Unable to turn on CPU clock");
	put_online_cpus();

	opp_enable = of_property_read_bool(pdev->dev.of_node,
					   "qcom,enable-opp");
	if (opp_enable)
		cpu_clock_a7_dev = pdev;

	return 0;
}

static const struct of_device_id clock_a7_match_table[] = {
	{ .compatible = "qcom,clock-a7-mdm9607" },
	{}
};

static struct platform_driver clock_a7_driver = {
	.probe = clock_a7_probe,
	.driver = {
		.name = "clock-a7",
		.of_match_table = clock_a7_match_table,
		.owner = THIS_MODULE,
	},
};

static int __init clock_a7_init(void)
{
	return platform_driver_register(&clock_a7_driver);
}
arch_initcall(clock_a7_init);

/* CPU devices are not currently available in arch_initcall */
static int __init cpu_clock_a7_init_opp(void)
{
	if (cpu_clock_a7_dev)
		populate_opp_table(cpu_clock_a7_dev);
	return 0;
}
module_init(cpu_clock_a7_init_opp);
1270	drivers/clk/msm/clock-alpha-pll.c	(new file; diff suppressed because it is too large)

720	drivers/clk/msm/clock-debug.c	(new file)
@@ -0,0 +1,720 @@
/*
 * Copyright (C) 2007 Google, Inc.
 * Copyright (c) 2007-2014, 2017, The Linux Foundation. All rights reserved.
 *
 * This software is licensed under the terms of the GNU General Public
 * License version 2, as published by the Free Software Foundation, and
 * may be copied, distributed, and modified under those terms.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/ctype.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/clk.h>
#include <linux/list.h>
#include <linux/clkdev.h>
#include <linux/uaccess.h>
#include <linux/mutex.h>
#include <linux/io.h>
#include <linux/clk/msm-clk-provider.h>
#include <trace/events/power.h>


#include "clock.h"

static LIST_HEAD(clk_list);
static DEFINE_MUTEX(clk_list_lock);

static struct dentry *debugfs_base;
static u32 debug_suspend;

static int clock_debug_rate_set(void *data, u64 val)
{
	struct clk *clock = data;
	int ret;

	/* Only increases to max rate will succeed, but that's actually good
	 * for debugging purposes so we don't check for error.
	 */
	if (clock->flags & CLKFLAG_MAX)
		clk_set_max_rate(clock, val);
	ret = clk_set_rate(clock, val);
	if (ret)
		pr_err("clk_set_rate(%s, %lu) failed (%d)\n", clock->dbg_name,
			(unsigned long)val, ret);

	return ret;
}

static int clock_debug_rate_get(void *data, u64 *val)
{
	struct clk *clock = data;
	*val = clk_get_rate(clock);
	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(clock_rate_fops, clock_debug_rate_get,
			clock_debug_rate_set, "%llu\n");

static struct clk *measure;

static int clock_debug_measure_get(void *data, u64 *val)
{
	struct clk *clock = data, *par;
	int ret, is_hw_gated;
	unsigned long meas_rate, sw_rate;

	/* Check to see if the clock is in hardware gating mode */
	if (clock->ops->in_hwcg_mode)
		is_hw_gated = clock->ops->in_hwcg_mode(clock);
	else
		is_hw_gated = 0;

	ret = clk_set_parent(measure, clock);
	if (!ret) {
		/*
		 * Disable hw gating to get accurate rate measurements. Only do
		 * this if the clock is explicitly enabled by software. This
		 * allows us to detect errors where clocks are on even though
		 * software is not requesting them to be on due to broken
		 * hardware gating signals.
		 */
		if (is_hw_gated && clock->count)
			clock->ops->disable_hwcg(clock);
		par = measure;
		while (par && par != clock) {
			if (par->ops->enable)
				par->ops->enable(par);
			par = par->parent;
		}
		*val = clk_get_rate(measure);
		/* Reenable hwgating if it was disabled */
		if (is_hw_gated && clock->count)
			clock->ops->enable_hwcg(clock);
	}

	/*
	 * If there's a divider on the path from the clock output to the
	 * measurement circuitry, account for it by dividing the original clock
	 * rate with the rate set on the parent of the measure clock.
	 */
	meas_rate = clk_get_rate(clock);
	sw_rate = clk_get_rate(measure->parent);
	if (sw_rate && meas_rate >= (sw_rate * 2))
		*val *= DIV_ROUND_CLOSEST(meas_rate, sw_rate);

	return ret;
}

DEFINE_SIMPLE_ATTRIBUTE(clock_measure_fops, clock_debug_measure_get,
			NULL, "%lld\n");
static int clock_debug_enable_set(void *data, u64 val)
{
	struct clk *clock = data;
	int rc = 0;

	if (val)
		rc = clk_prepare_enable(clock);
	else
		clk_disable_unprepare(clock);

	return rc;
}

static int clock_debug_enable_get(void *data, u64 *val)
{
	struct clk *clock = data;
	int enabled;

	if (clock->ops->is_enabled)
		enabled = clock->ops->is_enabled(clock);
	else
		enabled = !!(clock->count);

	*val = enabled;
	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(clock_enable_fops, clock_debug_enable_get,
			clock_debug_enable_set, "%lld\n");

static int clock_debug_local_get(void *data, u64 *val)
{
	struct clk *clock = data;

	if (!clock->ops->is_local)
		*val = true;
	else
		*val = clock->ops->is_local(clock);

	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(clock_local_fops, clock_debug_local_get,
			NULL, "%llu\n");

static int clock_debug_hwcg_get(void *data, u64 *val)
{
	struct clk *clock = data;

	if (clock->ops->in_hwcg_mode)
		*val = !!clock->ops->in_hwcg_mode(clock);
	else
		*val = 0;
	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(clock_hwcg_fops, clock_debug_hwcg_get,
			NULL, "%llu\n");

static void clock_print_fmax_by_level(struct seq_file *m, int level)
{
	struct clk *clock = m->private;
	struct clk_vdd_class *vdd_class = clock->vdd_class;
	int off, i, vdd_level, nregs = vdd_class->num_regulators;

	vdd_level = find_vdd_level(clock, clock->rate);

	seq_printf(m, "%2s%10lu", vdd_level == level ? "[" : "",
		clock->fmax[level]);
	for (i = 0; i < nregs; i++) {
		off = nregs*level + i;
		if (vdd_class->vdd_uv)
			seq_printf(m, "%10u", vdd_class->vdd_uv[off]);
		if (vdd_class->vdd_ua)
			seq_printf(m, "%10u", vdd_class->vdd_ua[off]);
	}

	if (vdd_level == level)
		seq_puts(m, "]");
	seq_puts(m, "\n");
}

static int fmax_rates_show(struct seq_file *m, void *unused)
{
	struct clk *clock = m->private;
	struct clk_vdd_class *vdd_class = clock->vdd_class;
	int level = 0, i, nregs = vdd_class->num_regulators;
	char reg_name[10];

	int vdd_level = find_vdd_level(clock, clock->rate);

	if (vdd_level < 0) {
		seq_printf(m, "could not find_vdd_level for %s, %ld\n",
			clock->dbg_name, clock->rate);
		return 0;
	}

	seq_printf(m, "%12s", "");
	for (i = 0; i < nregs; i++) {
		snprintf(reg_name, ARRAY_SIZE(reg_name), "reg %d", i);
		seq_printf(m, "%10s", reg_name);
		if (vdd_class->vdd_ua)
			seq_printf(m, "%10s", "");
	}

	seq_printf(m, "\n%12s", "freq");
	for (i = 0; i < nregs; i++) {
		seq_printf(m, "%10s", "uV");
		if (vdd_class->vdd_ua)
			seq_printf(m, "%10s", "uA");
	}
	seq_puts(m, "\n");

	for (level = 0; level < clock->num_fmax; level++)
		clock_print_fmax_by_level(m, level);

	return 0;
}

static int fmax_rates_open(struct inode *inode, struct file *file)
{
	return single_open(file, fmax_rates_show, inode->i_private);
}

static const struct file_operations fmax_rates_fops = {
	.open = fmax_rates_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = seq_release,
};

static int orphan_list_show(struct seq_file *m, void *unused)
{
	struct clk *c, *safe;

	list_for_each_entry_safe(c, safe, &orphan_clk_list, list)
		seq_printf(m, "%s\n", c->dbg_name);

	return 0;
}

static int orphan_list_open(struct inode *inode, struct file *file)
{
	return single_open(file, orphan_list_show, inode->i_private);
}

static const struct file_operations orphan_list_fops = {
	.open = orphan_list_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = seq_release,
};

#define clock_debug_output(m, c, fmt, ...)		\
do {							\
	if (m)						\
		seq_printf(m, fmt, ##__VA_ARGS__);	\
	else if (c)					\
		pr_cont(fmt, ##__VA_ARGS__);		\
	else						\
		pr_info(fmt, ##__VA_ARGS__);		\
} while (0)
/*
 * clock_debug_print_enabled_debug_suspend() - Print names of enabled clocks
 * during suspend.
 */
static void clock_debug_print_enabled_debug_suspend(struct seq_file *s)
{
	struct clk *c;
	int cnt = 0;

	if (!mutex_trylock(&clk_list_lock))
		return;

	clock_debug_output(s, 0, "Enabled clocks:\n");

	list_for_each_entry(c, &clk_list, list) {
		if (!c || !c->prepare_count)
			continue;
		if (c->vdd_class)
			clock_debug_output(s, 0, " %s:%lu:%lu [%ld, %d]",
					c->dbg_name, c->prepare_count,
					c->count, c->rate,
					find_vdd_level(c, c->rate));
		else
			clock_debug_output(s, 0, " %s:%lu:%lu [%ld]",
					c->dbg_name, c->prepare_count,
					c->count, c->rate);
		cnt++;
	}

	mutex_unlock(&clk_list_lock);

	if (cnt)
		clock_debug_output(s, 0, "Enabled clock count: %d\n", cnt);
	else
		clock_debug_output(s, 0, "No clocks enabled.\n");
}

static int clock_debug_print_clock(struct clk *c, struct seq_file *m)
{
	char *start = "";

	if (!c || !c->prepare_count)
		return 0;

	clock_debug_output(m, 0, "\t");
	do {
		if (c->vdd_class)
			clock_debug_output(m, 1, "%s%s:%lu:%lu [%ld, %d]",
				start, c->dbg_name, c->prepare_count, c->count,
				c->rate, find_vdd_level(c, c->rate));
		else
			clock_debug_output(m, 1, "%s%s:%lu:%lu [%ld]", start,
				c->dbg_name, c->prepare_count, c->count,
				c->rate);
		start = " -> ";
	} while ((c = clk_get_parent(c)));

	clock_debug_output(m, 1, "\n");

	return 1;
}

/**
 * clock_debug_print_enabled_clocks() - Print names of enabled clocks
 *
 */
static void clock_debug_print_enabled_clocks(struct seq_file *m)
{
	struct clk *c;
	int cnt = 0;

	if (!mutex_trylock(&clk_list_lock)) {
		pr_err("clock-debug: Clocks are being registered. Cannot print clock state now.\n");
		return;
	}
	clock_debug_output(m, 0, "Enabled clocks:\n");
	list_for_each_entry(c, &clk_list, list) {
		cnt += clock_debug_print_clock(c, m);
	}
	mutex_unlock(&clk_list_lock);

	if (cnt)
		clock_debug_output(m, 0, "Enabled clock count: %d\n", cnt);
	else
		clock_debug_output(m, 0, "No clocks enabled.\n");
}

static int enabled_clocks_show(struct seq_file *m, void *unused)
{
	clock_debug_print_enabled_clocks(m);
	return 0;
}

static int enabled_clocks_open(struct inode *inode, struct file *file)
{
	return single_open(file, enabled_clocks_show, inode->i_private);
}

static const struct file_operations enabled_clocks_fops = {
	.open = enabled_clocks_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = seq_release,
};

static int trace_clocks_show(struct seq_file *m, void *unused)
{
	struct clk *c;
	int total_cnt = 0;

	if (!mutex_trylock(&clk_list_lock)) {
		pr_err("trace_clocks: Clocks are being registered. Cannot trace clock state now.\n");
		return 1;
	}
	list_for_each_entry(c, &clk_list, list) {
		trace_clock_state(c->dbg_name, c->prepare_count, c->count,
					c->rate);
		total_cnt++;
	}
	mutex_unlock(&clk_list_lock);
	clock_debug_output(m, 0, "Total clock count: %d\n", total_cnt);

	return 0;
}

static int trace_clocks_open(struct inode *inode, struct file *file)
{
	return single_open(file, trace_clocks_show, inode->i_private);
}
static const struct file_operations trace_clocks_fops = {
	.open = trace_clocks_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = seq_release,
};

static int list_rates_show(struct seq_file *m, void *unused)
{
	struct clk *clock = m->private;
	int level, i = 0;
	unsigned long rate, fmax = 0;

	/* Find max frequency supported within voltage constraints. */
	if (!clock->vdd_class) {
		fmax = ULONG_MAX;
	} else {
		for (level = 0; level < clock->num_fmax; level++)
			if (clock->fmax[level])
				fmax = clock->fmax[level];
	}

	/*
	 * List supported frequencies <= fmax. Higher frequencies may appear in
	 * the frequency table, but are not valid and should not be listed.
	 */
	while (!IS_ERR_VALUE(rate = clock->ops->list_rate(clock, i++))) {
		if (rate <= fmax)
			seq_printf(m, "%lu\n", rate);
	}

	return 0;
}

static int list_rates_open(struct inode *inode, struct file *file)
{
	return single_open(file, list_rates_show, inode->i_private);
}

static const struct file_operations list_rates_fops = {
	.open = list_rates_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = seq_release,
};

static ssize_t clock_parent_read(struct file *filp, char __user *ubuf,
		size_t cnt, loff_t *ppos)
{
	struct clk *clock = filp->private_data;
	struct clk *p = clock->parent;
	char name[256] = {0};

	snprintf(name, sizeof(name), "%s\n", p ? p->dbg_name : "None");

	return simple_read_from_buffer(ubuf, cnt, ppos, name, strlen(name));
}


static ssize_t clock_parent_write(struct file *filp,
		const char __user *ubuf, size_t cnt, loff_t *ppos)
{
	struct clk *clock = filp->private_data;
	char buf[256];
	char *cmp;
	int ret;
	struct clk *parent = NULL;

	cnt = min(cnt, sizeof(buf) - 1);
	if (copy_from_user(&buf, ubuf, cnt))
		return -EFAULT;
	buf[cnt] = '\0';
	cmp = strstrip(buf);

	mutex_lock(&clk_list_lock);
	list_for_each_entry(parent, &clk_list, list) {
		if (!strcmp(cmp, parent->dbg_name))
			break;
	}

	if (&parent->list == &clk_list) {
		ret = -EINVAL;
		goto err;
	}

	mutex_unlock(&clk_list_lock);
	ret = clk_set_parent(clock, parent);
	if (ret)
		return ret;

	return cnt;
err:
	mutex_unlock(&clk_list_lock);
	return ret;
}


static const struct file_operations clock_parent_fops = {
	.open = simple_open,
	.read = clock_parent_read,
	.write = clock_parent_write,
};

void clk_debug_print_hw(struct clk *clk, struct seq_file *f)
{
	void __iomem *base;
	struct clk_register_data *regs;
	u32 i, j, size;

	if (IS_ERR_OR_NULL(clk))
		return;

	clk_debug_print_hw(clk->parent, f);

	clock_debug_output(f, false, "%s\n", clk->dbg_name);

	if (!clk->ops->list_registers)
		return;

	j = 0;
	base = clk->ops->list_registers(clk, j, &regs, &size);
	while (!IS_ERR(base)) {
		for (i = 0; i < size; i++) {
			u32 val = readl_relaxed(base + regs[i].offset);

			clock_debug_output(f, false, "%20s: 0x%.8x\n",
						regs[i].name, val);
		}
		j++;
		base = clk->ops->list_registers(clk, j, &regs, &size);
	}
}
static int print_hw_show(struct seq_file *m, void *unused)
{
	struct clk *c = m->private;

	clk_debug_print_hw(c, m);

	return 0;
}

static int print_hw_open(struct inode *inode, struct file *file)
{
	return single_open(file, print_hw_show, inode->i_private);
}

static const struct file_operations clock_print_hw_fops = {
	.open = print_hw_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = seq_release,
};


static void clock_measure_add(struct clk *clock)
{
	if (IS_ERR_OR_NULL(measure))
		return;

	if (clk_set_parent(measure, clock))
		return;

	debugfs_create_file("measure", 0444, clock->clk_dir, clock,
				&clock_measure_fops);
}

static int clock_debug_add(struct clk *clock)
{
	char temp[50], *ptr;
	struct dentry *clk_dir;

	if (!debugfs_base)
		return -ENOMEM;

	strlcpy(temp, clock->dbg_name, ARRAY_SIZE(temp));
	for (ptr = temp; *ptr; ptr++)
		*ptr = tolower(*ptr);

	clk_dir = debugfs_create_dir(temp, debugfs_base);
	if (!clk_dir)
		return -ENOMEM;

	clock->clk_dir = clk_dir;

	if (!debugfs_create_file("rate", 0644, clk_dir,
				clock, &clock_rate_fops))
		goto error;

	if (!debugfs_create_file("enable", 0644, clk_dir,
				clock, &clock_enable_fops))
		goto error;

	if (!debugfs_create_file("is_local", 0444, clk_dir, clock,
				&clock_local_fops))
		goto error;

	if (!debugfs_create_file("has_hw_gating", 0444, clk_dir, clock,
				&clock_hwcg_fops))
		goto error;

	if (clock->ops->list_rate)
		if (!debugfs_create_file("list_rates",
				0444, clk_dir, clock, &list_rates_fops))
			goto error;

	if (clock->vdd_class && !debugfs_create_file(
			"fmax_rates", 0444, clk_dir, clock, &fmax_rates_fops))
		goto error;

	if (!debugfs_create_file("parent", 0444, clk_dir, clock,
				&clock_parent_fops))
		goto error;

	if (!debugfs_create_file("print", 0444, clk_dir, clock,
				&clock_print_hw_fops))
		goto error;

	clock_measure_add(clock);

	return 0;
error:
	debugfs_remove_recursive(clk_dir);
	return -ENOMEM;
}
static DEFINE_MUTEX(clk_debug_lock);
static int clk_debug_init_once;

/**
 * clock_debug_init() - Initialize clock debugfs
 * Lock clk_debug_lock before invoking this function.
 */
static int clock_debug_init(void)
{
	if (clk_debug_init_once)
		return 0;

	clk_debug_init_once = 1;

	debugfs_base = debugfs_create_dir("clk", NULL);
	if (!debugfs_base)
		return -ENOMEM;

	if (!debugfs_create_u32("debug_suspend", 0644,
				debugfs_base, &debug_suspend)) {
		debugfs_remove_recursive(debugfs_base);
		return -ENOMEM;
	}

	if (!debugfs_create_file("enabled_clocks", 0444, debugfs_base, NULL,
				&enabled_clocks_fops))
		return -ENOMEM;

	if (!debugfs_create_file("orphan_list", 0444, debugfs_base, NULL,
				&orphan_list_fops))
		return -ENOMEM;

	if (!debugfs_create_file("trace_clocks", 0444, debugfs_base, NULL,
				&trace_clocks_fops))
		return -ENOMEM;

	return 0;
}

/**
 * clock_debug_register() - Add additional clocks to clock debugfs hierarchy
 * @list: List of clocks to create debugfs nodes for
 */
int clock_debug_register(struct clk *clk)
{
	int ret = 0;
	struct clk *c;

	mutex_lock(&clk_list_lock);
	if (!list_empty(&clk->list))
		goto out;

	ret = clock_debug_init();
	if (ret)
		goto out;

	if (IS_ERR_OR_NULL(measure)) {
		if (clk->flags & CLKFLAG_MEASURE)
			measure = clk;
		if (!IS_ERR_OR_NULL(measure)) {
			list_for_each_entry(c, &clk_list, list)
				clock_measure_add(c);
		}
	}

	list_add_tail(&clk->list, &clk_list);
	clock_debug_add(clk);
out:
	mutex_unlock(&clk_list_lock);
	return ret;
}

/*
 * Print the names of enabled clocks and their parents if debug_suspend is set
 */
void clock_debug_print_enabled(bool print_parent)
{
	if (likely(!debug_suspend))
		return;
	if (print_parent)
		clock_debug_print_enabled_clocks(NULL);
	else
		clock_debug_print_enabled_debug_suspend(NULL);

}
113	drivers/clk/msm/clock-dummy.c	(new file)
@@ -0,0 +1,113 @@
/* Copyright (c) 2011, 2013-2014, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#include <linux/clk/msm-clk-provider.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <soc/qcom/msm-clock-controller.h>

static int dummy_clk_reset(struct clk *clk, enum clk_reset_action action)
{
	return 0;
}

static int dummy_clk_set_rate(struct clk *clk, unsigned long rate)
{
	clk->rate = rate;
	return 0;
}

static int dummy_clk_set_max_rate(struct clk *clk, unsigned long rate)
{
	return 0;
}

static int dummy_clk_set_flags(struct clk *clk, unsigned long flags)
{
	return 0;
}

static unsigned long dummy_clk_get_rate(struct clk *clk)
{
	return clk->rate;
}

static long dummy_clk_round_rate(struct clk *clk, unsigned long rate)
{
	return rate;
}

const struct clk_ops clk_ops_dummy = {
	.reset = dummy_clk_reset,
	.set_rate = dummy_clk_set_rate,
	.set_max_rate = dummy_clk_set_max_rate,
	.set_flags = dummy_clk_set_flags,
	.get_rate = dummy_clk_get_rate,
	.round_rate = dummy_clk_round_rate,
};

struct clk dummy_clk = {
	.dbg_name = "dummy_clk",
	.ops = &clk_ops_dummy,
	CLK_INIT(dummy_clk),
};

static void *dummy_clk_dt_parser(struct device *dev, struct device_node *np)
{
	struct clk *c;

	c = devm_kzalloc(dev, sizeof(*c), GFP_KERNEL);
	if (!c)
		return ERR_PTR(-ENOMEM);

	c->ops = &clk_ops_dummy;
	return msmclk_generic_clk_init(dev, np, c);
}
MSMCLK_PARSER(dummy_clk_dt_parser, "qcom,dummy-clk", 0);

static struct clk *of_dummy_get(struct of_phandle_args *clkspec,
				void *data)
{
	return &dummy_clk;
}

static const struct of_device_id msm_clock_dummy_match_table[] = {
	{ .compatible = "qcom,dummycc" },
	{}
};

static int msm_clock_dummy_probe(struct platform_device *pdev)
{
	int ret;

	ret = of_clk_add_provider(pdev->dev.of_node, of_dummy_get, NULL);
	if (ret)
		return -ENOMEM;

	dev_info(&pdev->dev, "Registered DUMMY provider.\n");
	return ret;
}

static struct platform_driver msm_clock_dummy_driver = {
	.probe = msm_clock_dummy_probe,
	.driver = {
		.name = "clock-dummy",
		.of_match_table = msm_clock_dummy_match_table,
		.owner = THIS_MODULE,
	},
};

int __init msm_dummy_clk_init(void)
{
	return platform_driver_register(&msm_clock_dummy_driver);
}
arch_initcall(msm_dummy_clk_init);
1966	drivers/clk/msm/clock-gcc-9607.c	(new file; diff suppressed because it is too large)

920	drivers/clk/msm/clock-generic.c	(new file)
@@ -0,0 +1,920 @@
/* Copyright (c) 2013-2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/clk/msm-clk-provider.h>
#include <linux/clk/msm-clock-generic.h>
#include <soc/qcom/msm-clock-controller.h>

/* ==================== Mux clock ==================== */

static int mux_parent_to_src_sel(struct mux_clk *mux, struct clk *p)
{
	return parent_to_src_sel(mux->parents, mux->num_parents, p);
}

static int mux_set_parent(struct clk *c, struct clk *p)
{
	struct mux_clk *mux = to_mux_clk(c);
	int sel = mux_parent_to_src_sel(mux, p);
	struct clk *old_parent;
	int rc = 0, i;
	unsigned long flags;

	if (sel < 0 && mux->rec_parents) {
		for (i = 0; i < mux->num_rec_parents; i++) {
			rc = clk_set_parent(mux->rec_parents[i], p);
			if (!rc) {
				/*
				 * This is necessary to ensure prepare/enable
				 * counts get propagated correctly.
				 */
				p = mux->rec_parents[i];
				sel = mux_parent_to_src_sel(mux, p);
				break;
			}
		}
	}

	if (sel < 0)
		return sel;

	rc = __clk_pre_reparent(c, p, &flags);
	if (rc)
		goto out;

	rc = mux->ops->set_mux_sel(mux, sel);
	if (rc)
		goto set_fail;

	old_parent = c->parent;
	c->parent = p;
	c->rate = clk_get_rate(p);
	__clk_post_reparent(c, old_parent, &flags);

	return 0;

set_fail:
	__clk_post_reparent(c, p, &flags);
out:
	return rc;
}

static long mux_round_rate(struct clk *c, unsigned long rate)
{
	struct mux_clk *mux = to_mux_clk(c);
	int i;
	unsigned long prate, rrate = 0;

	for (i = 0; i < mux->num_parents; i++) {
		prate = clk_round_rate(mux->parents[i].src, rate);
		if (is_better_rate(rate, rrate, prate))
			rrate = prate;
	}
	if (!rrate)
		return -EINVAL;

	return rrate;
}
static int mux_set_rate(struct clk *c, unsigned long rate)
{
	struct mux_clk *mux = to_mux_clk(c);
	struct clk *new_parent = NULL;
	int rc = 0, i;
	unsigned long new_par_curr_rate;
	unsigned long flags;

	/*
	 * Check if one of the possible parents is already at the requested
	 * rate.
	 */
	for (i = 0; i < mux->num_parents && mux->try_get_rate; i++) {
		struct clk *p = mux->parents[i].src;

		if (p->rate == rate && clk_round_rate(p, rate) == rate) {
			new_parent = mux->parents[i].src;
			break;
		}
	}

	for (i = 0; i < mux->num_parents && !(!i && new_parent); i++) {
		if (clk_round_rate(mux->parents[i].src, rate) == rate) {
			new_parent = mux->parents[i].src;
			if (!mux->try_new_parent)
				break;
			if (mux->try_new_parent && new_parent != c->parent)
				break;
		}
	}

	if (new_parent == NULL)
		return -EINVAL;

	/*
	 * Switch to safe parent since the old and new parent might be the
	 * same and the parent might temporarily turn off while switching
	 * rates. If the mux can switch between distinct sources safely
	 * (indicated by try_new_parent), and the new source is not the current
	 * parent, do not switch to the safe parent.
	 */
	if (mux->safe_sel >= 0 &&
		!(mux->try_new_parent && (new_parent != c->parent))) {
		/*
		 * The safe parent might be a clock with multiple sources;
		 * to select the "safe" source, set a safe frequency.
		 */
		if (mux->safe_freq) {
			rc = clk_set_rate(mux->safe_parent, mux->safe_freq);
			if (rc) {
				pr_err("Failed to set safe rate on %s\n",
					clk_name(mux->safe_parent));
				return rc;
			}
		}

		/*
		 * Some mux implementations might switch to/from a low power
		 * parent as part of their disable/enable ops. Grab the
		 * enable lock to avoid racing with these implementations.
		 */
		spin_lock_irqsave(&c->lock, flags);
		rc = mux->ops->set_mux_sel(mux, mux->safe_sel);
		spin_unlock_irqrestore(&c->lock, flags);
		if (rc)
			return rc;

	}

	new_par_curr_rate = clk_get_rate(new_parent);
	rc = clk_set_rate(new_parent, rate);
	if (rc)
		goto set_rate_fail;

	rc = mux_set_parent(c, new_parent);
	if (rc)
		goto set_par_fail;

	return 0;

set_par_fail:
	clk_set_rate(new_parent, new_par_curr_rate);
set_rate_fail:
	WARN(mux->ops->set_mux_sel(mux,
		mux_parent_to_src_sel(mux, c->parent)),
		"Set rate failed for %s. Also in bad state!\n", c->dbg_name);
	return rc;
}

static int mux_enable(struct clk *c)
{
	struct mux_clk *mux = to_mux_clk(c);

	if (mux->ops->enable)
		return mux->ops->enable(mux);
	return 0;
}

static void mux_disable(struct clk *c)
{
	struct mux_clk *mux = to_mux_clk(c);

	if (mux->ops->disable)
		return mux->ops->disable(mux);
}

static struct clk *mux_get_parent(struct clk *c)
{
	struct mux_clk *mux = to_mux_clk(c);
	int sel = mux->ops->get_mux_sel(mux);
	int i;

	for (i = 0; i < mux->num_parents; i++) {
		if (mux->parents[i].sel == sel)
			return mux->parents[i].src;
	}

	/* Unfamiliar parent. */
	return NULL;
}

static enum handoff mux_handoff(struct clk *c)
{
	struct mux_clk *mux = to_mux_clk(c);

	c->rate = clk_get_rate(c->parent);
	mux->safe_sel = mux_parent_to_src_sel(mux, mux->safe_parent);

	if (mux->en_mask && mux->ops && mux->ops->is_enabled)
		return mux->ops->is_enabled(mux)
			? HANDOFF_ENABLED_CLK
			: HANDOFF_DISABLED_CLK;

	/*
	 * If this function returns 'enabled' even when the clock downstream
	 * of this clock is disabled, then handoff code will unnecessarily
	 * enable the current parent of this clock. If this function always
	 * returns 'disabled' and a clock downstream is on, the clock handoff
	 * code will bump up the ref count for this clock and its current
|
||||
* parent as necessary. So, clocks without an actual HW gate can
|
||||
* always return disabled.
|
||||
*/
|
||||
return HANDOFF_DISABLED_CLK;
|
||||
}
|
||||
|
||||
static void __iomem *mux_clk_list_registers(struct clk *c, int n,
|
||||
struct clk_register_data **regs, u32 *size)
|
||||
{
|
||||
struct mux_clk *mux = to_mux_clk(c);
|
||||
|
||||
if (mux->ops && mux->ops->list_registers)
|
||||
return mux->ops->list_registers(mux, n, regs, size);
|
||||
|
||||
return ERR_PTR(-EINVAL);
|
||||
}
|
||||
|
||||
const struct clk_ops clk_ops_gen_mux = {
|
||||
.enable = mux_enable,
|
||||
.disable = mux_disable,
|
||||
.set_parent = mux_set_parent,
|
||||
.round_rate = mux_round_rate,
|
||||
.set_rate = mux_set_rate,
|
||||
.handoff = mux_handoff,
|
||||
.get_parent = mux_get_parent,
|
||||
.list_registers = mux_clk_list_registers,
|
||||
};
|
||||
|
||||
/* ==================== Divider clock ==================== */

static long __div_round_rate(struct div_data *data, unsigned long rate,
	struct clk *parent, unsigned int *best_div, unsigned long *best_prate)
{
	unsigned int div, min_div, max_div, _best_div = 1;
	unsigned long prate, _best_prate = 0, rrate = 0, req_prate, actual_rate;
	unsigned int numer;

	rate = max(rate, 1UL);

	min_div = max(data->min_div, 1U);
	max_div = min(data->max_div, (unsigned int) (ULONG_MAX));

	/*
	 * div values are doubled for half dividers.
	 * Adjust for that by picking a numer of 2.
	 */
	numer = data->is_half_divider ? 2 : 1;

	for (div = min_div; div <= max_div; div++) {
		if (data->skip_odd_div && (div & 1))
			if (!(data->allow_div_one && (div == 1)))
				continue;
		if (data->skip_even_div && !(div & 1))
			continue;
		req_prate = mult_frac(rate, div, numer);
		prate = clk_round_rate(parent, req_prate);
		if (IS_ERR_VALUE(prate))
			break;

		actual_rate = mult_frac(prate, numer, div);
		if (is_better_rate(rate, rrate, actual_rate)) {
			rrate = actual_rate;
			_best_div = div;
			_best_prate = prate;
		}

		/*
		 * Trying higher dividers is only going to ask the parent for
		 * a higher rate. If it can't even output a rate higher than
		 * the one we request for this divider, the parent is not
		 * going to be able to output an even higher rate required
		 * for a higher divider. So, stop trying higher dividers.
		 */
		if (actual_rate < rate)
			break;

		if (rrate <= rate + data->rate_margin)
			break;
	}

	if (!rrate)
		return -EINVAL;
	if (best_div)
		*best_div = _best_div;
	if (best_prate)
		*best_prate = _best_prate;

	return rrate;
}

static long div_round_rate(struct clk *c, unsigned long rate)
{
	struct div_clk *d = to_div_clk(c);

	return __div_round_rate(&d->data, rate, c->parent, NULL, NULL);
}

static int _find_safe_div(struct clk *c, unsigned long rate)
{
	struct div_clk *d = to_div_clk(c);
	struct div_data *data = &d->data;
	unsigned long fast = max(rate, c->rate);
	unsigned int numer = data->is_half_divider ? 2 : 1;
	int i, safe_div = 0;

	if (!d->safe_freq)
		return 0;

	/* Find the max safe freq that is lesser than fast */
	for (i = data->max_div; i >= data->min_div; i--)
		if (mult_frac(d->safe_freq, numer, i) <= fast)
			safe_div = i;

	return safe_div ?: -EINVAL;
}

static int div_set_rate(struct clk *c, unsigned long rate)
{
	struct div_clk *d = to_div_clk(c);
	int safe_div, div, rc = 0;
	long rrate, old_prate, new_prate;
	struct div_data *data = &d->data;

	rrate = __div_round_rate(data, rate, c->parent, &div, &new_prate);
	if (rrate < rate || rrate > rate + data->rate_margin)
		return -EINVAL;

	/*
	 * For fixed divider clock we don't want to return an error if the
	 * requested rate matches the achievable rate. So, don't check for
	 * !d->ops and return an error. __div_round_rate() ensures div ==
	 * d->div if !d->ops.
	 */

	safe_div = _find_safe_div(c, rate);
	if (d->safe_freq && safe_div < 0) {
		pr_err("No safe div on %s for transitioning from %lu to %lu\n",
			c->dbg_name, c->rate, rate);
		return -EINVAL;
	}

	safe_div = max(safe_div, div);

	if (safe_div > data->div) {
		rc = d->ops->set_div(d, safe_div);
		if (rc) {
			pr_err("Failed to set div %d on %s\n", safe_div,
				c->dbg_name);
			return rc;
		}
	}

	old_prate = clk_get_rate(c->parent);
	rc = clk_set_rate(c->parent, new_prate);
	if (rc)
		goto set_rate_fail;

	if (div < data->div)
		rc = d->ops->set_div(d, div);
	else if (div < safe_div)
		rc = d->ops->set_div(d, div);
	if (rc)
		goto div_dec_fail;

	data->div = div;

	return 0;

div_dec_fail:
	WARN(clk_set_rate(c->parent, old_prate),
		"Set rate failed for %s. Also in bad state!\n", c->dbg_name);
set_rate_fail:
	if (safe_div > data->div)
		WARN(d->ops->set_div(d, data->div),
			"Set rate failed for %s. Also in bad state!\n",
			c->dbg_name);
	return rc;
}

static int div_enable(struct clk *c)
{
	struct div_clk *d = to_div_clk(c);

	if (d->ops && d->ops->enable)
		return d->ops->enable(d);
	return 0;
}

static void div_disable(struct clk *c)
{
	struct div_clk *d = to_div_clk(c);

	if (d->ops && d->ops->disable)
		return d->ops->disable(d);
}

static enum handoff div_handoff(struct clk *c)
{
	struct div_clk *d = to_div_clk(c);
	unsigned int div = d->data.div;

	if (d->ops && d->ops->get_div)
		div = max(d->ops->get_div(d), 1);
	div = max(div, 1U);
	c->rate = clk_get_rate(c->parent) / div;

	if (!d->ops || !d->ops->set_div)
		d->data.min_div = d->data.max_div = div;
	d->data.div = div;

	if (d->en_mask && d->ops && d->ops->is_enabled)
		return d->ops->is_enabled(d)
			? HANDOFF_ENABLED_CLK
			: HANDOFF_DISABLED_CLK;

	/*
	 * If this function returns 'enabled' even when the clock downstream
	 * of this clock is disabled, then handoff code will unnecessarily
	 * enable the current parent of this clock. If this function always
	 * returns 'disabled' and a clock downstream is on, the clock handoff
	 * code will bump up the ref count for this clock and its current
	 * parent as necessary. So, clocks without an actual HW gate can
	 * always return disabled.
	 */
	return HANDOFF_DISABLED_CLK;
}

static void __iomem *div_clk_list_registers(struct clk *c, int n,
				struct clk_register_data **regs, u32 *size)
{
	struct div_clk *d = to_div_clk(c);

	if (d->ops && d->ops->list_registers)
		return d->ops->list_registers(d, n, regs, size);

	return ERR_PTR(-EINVAL);
}

const struct clk_ops clk_ops_div = {
	.enable = div_enable,
	.disable = div_disable,
	.round_rate = div_round_rate,
	.set_rate = div_set_rate,
	.handoff = div_handoff,
	.list_registers = div_clk_list_registers,
};

static long __slave_div_round_rate(struct clk *c, unsigned long rate,
					int *best_div)
{
	struct div_clk *d = to_div_clk(c);
	unsigned int div, min_div, max_div;
	long p_rate;

	rate = max(rate, 1UL);

	min_div = d->data.min_div;
	max_div = d->data.max_div;

	p_rate = clk_get_rate(c->parent);
	div = DIV_ROUND_CLOSEST(p_rate, rate);
	div = max(div, min_div);
	div = min(div, max_div);
	if (best_div)
		*best_div = div;

	return p_rate / div;
}

static long slave_div_round_rate(struct clk *c, unsigned long rate)
{
	return __slave_div_round_rate(c, rate, NULL);
}

static int slave_div_set_rate(struct clk *c, unsigned long rate)
{
	struct div_clk *d = to_div_clk(c);
	int div, rc = 0;
	long rrate;

	rrate = __slave_div_round_rate(c, rate, &div);
	if (rrate != rate)
		return -EINVAL;

	if (div == d->data.div)
		return 0;

	/*
	 * For fixed divider clock we don't want to return an error if the
	 * requested rate matches the achievable rate. So, don't check for
	 * !d->ops and return an error. __slave_div_round_rate() ensures
	 * div == d->data.div if !d->ops.
	 */
	rc = d->ops->set_div(d, div);
	if (rc)
		return rc;

	d->data.div = div;

	return 0;
}

static unsigned long slave_div_get_rate(struct clk *c)
{
	struct div_clk *d = to_div_clk(c);

	if (!d->data.div)
		return 0;
	return clk_get_rate(c->parent) / d->data.div;
}

const struct clk_ops clk_ops_slave_div = {
	.enable = div_enable,
	.disable = div_disable,
	.round_rate = slave_div_round_rate,
	.set_rate = slave_div_set_rate,
	.get_rate = slave_div_get_rate,
	.handoff = div_handoff,
	.list_registers = div_clk_list_registers,
};

/**
 * External clock
 * Some clock controllers have an input clock signal that comes from outside
 * the clock controller. That input clock signal might then be used as a
 * source for several clocks inside the clock controller. This external clock
 * implementation models this input clock signal by just passing on the
 * requests to the clock's parent, the original external clock source. The
 * driver for the clock controller should clk_get() the original external
 * clock in the probe function and set it as a parent to this external clock.
 */

long parent_round_rate(struct clk *c, unsigned long rate)
{
	return clk_round_rate(c->parent, rate);
}

int parent_set_rate(struct clk *c, unsigned long rate)
{
	return clk_set_rate(c->parent, rate);
}

unsigned long parent_get_rate(struct clk *c)
{
	return clk_get_rate(c->parent);
}

static int ext_set_parent(struct clk *c, struct clk *p)
{
	return clk_set_parent(c->parent, p);
}

static struct clk *ext_get_parent(struct clk *c)
{
	struct ext_clk *ext = to_ext_clk(c);

	if (!IS_ERR_OR_NULL(c->parent))
		return c->parent;
	return clk_get(ext->dev, ext->clk_id);
}

static enum handoff ext_handoff(struct clk *c)
{
	c->rate = clk_get_rate(c->parent);
	/* Similar reasoning applied in div_handoff, see comment there. */
	return HANDOFF_DISABLED_CLK;
}

const struct clk_ops clk_ops_ext = {
	.handoff = ext_handoff,
	.round_rate = parent_round_rate,
	.set_rate = parent_set_rate,
	.get_rate = parent_get_rate,
	.set_parent = ext_set_parent,
	.get_parent = ext_get_parent,
};

static void *ext_clk_dt_parser(struct device *dev, struct device_node *np)
{
	struct ext_clk *ext;
	const char *str;
	int rc;

	ext = devm_kzalloc(dev, sizeof(*ext), GFP_KERNEL);
	if (!ext)
		return ERR_PTR(-ENOMEM);

	ext->dev = dev;
	rc = of_property_read_string(np, "qcom,clock-names", &str);
	if (!rc)
		ext->clk_id = (void *)str;

	ext->c.ops = &clk_ops_ext;
	return msmclk_generic_clk_init(dev, np, &ext->c);
}
MSMCLK_PARSER(ext_clk_dt_parser, "qcom,ext-clk", 0);

/* ==================== Mux_div clock ==================== */

static int mux_div_clk_enable(struct clk *c)
{
	struct mux_div_clk *md = to_mux_div_clk(c);

	if (md->ops->enable)
		return md->ops->enable(md);
	return 0;
}

static void mux_div_clk_disable(struct clk *c)
{
	struct mux_div_clk *md = to_mux_div_clk(c);

	if (md->ops->disable)
		return md->ops->disable(md);
}

static long __mux_div_round_rate(struct clk *c, unsigned long rate,
	struct clk **best_parent, int *best_div, unsigned long *best_prate)
{
	struct mux_div_clk *md = to_mux_div_clk(c);
	unsigned int i;
	unsigned long rrate, best = 0, _best_div = 0, _best_prate = 0;
	struct clk *_best_parent = 0;

	if (md->try_get_rate) {
		for (i = 0; i < md->num_parents; i++) {
			int divider;
			unsigned long p_rate;

			rrate = __div_round_rate(&md->data, rate,
					md->parents[i].src,
					&divider, &p_rate);
			/*
			 * Check if one of the possible parents is already at
			 * the requested rate.
			 */
			if (p_rate == clk_get_rate(md->parents[i].src)
					&& rrate == rate) {
				best = rrate;
				_best_div = divider;
				_best_prate = p_rate;
				_best_parent = md->parents[i].src;
				goto end;
			}
		}
	}

	for (i = 0; i < md->num_parents; i++) {
		int div;
		unsigned long prate;

		rrate = __div_round_rate(&md->data, rate, md->parents[i].src,
				&div, &prate);

		if (is_better_rate(rate, best, rrate)) {
			best = rrate;
			_best_div = div;
			_best_prate = prate;
			_best_parent = md->parents[i].src;
		}

		if (rate <= rrate && rrate <= rate + md->data.rate_margin)
			break;
	}
end:
	if (best_div)
		*best_div = _best_div;
	if (best_prate)
		*best_prate = _best_prate;
	if (best_parent)
		*best_parent = _best_parent;

	if (best)
		return best;
	return -EINVAL;
}

static long mux_div_clk_round_rate(struct clk *c, unsigned long rate)
{
	return __mux_div_round_rate(c, rate, NULL, NULL, NULL);
}

/* requires enable lock to be held */
static int __set_src_div(struct mux_div_clk *md, struct clk *parent, u32 div)
{
	u32 rc = 0, src_sel;

	src_sel = parent_to_src_sel(md->parents, md->num_parents, parent);
	/*
	 * If the clock is disabled, don't change to the new settings until
	 * the clock is reenabled
	 */
	if (md->c.count)
		rc = md->ops->set_src_div(md, src_sel, div);
	if (!rc) {
		md->data.div = div;
		md->src_sel = src_sel;
	}

	return rc;
}

static int set_src_div(struct mux_div_clk *md, struct clk *parent, u32 div)
{
	unsigned long flags;
	u32 rc;

	spin_lock_irqsave(&md->c.lock, flags);
	rc = __set_src_div(md, parent, div);
	spin_unlock_irqrestore(&md->c.lock, flags);

	return rc;
}

/* Must be called after handoff to ensure parent clock rates are initialized */
static int safe_parent_init_once(struct clk *c)
{
	unsigned long rrate;
	u32 best_div;
	struct clk *best_parent;
	struct mux_div_clk *md = to_mux_div_clk(c);

	if (IS_ERR(md->safe_parent))
		return -EINVAL;
	if (!md->safe_freq || md->safe_parent)
		return 0;

	rrate = __mux_div_round_rate(c, md->safe_freq, &best_parent,
			&best_div, NULL);

	if (rrate == md->safe_freq) {
		md->safe_div = best_div;
		md->safe_parent = best_parent;
	} else {
		md->safe_parent = ERR_PTR(-EINVAL);
		return -EINVAL;
	}
	return 0;
}

static int mux_div_clk_set_rate(struct clk *c, unsigned long rate)
{
	struct mux_div_clk *md = to_mux_div_clk(c);
	unsigned long flags, rrate;
	unsigned long new_prate, new_parent_orig_rate;
	struct clk *old_parent, *new_parent;
	u32 new_div, old_div;
	int rc;

	rc = safe_parent_init_once(c);
	if (rc)
		return rc;

	rrate = __mux_div_round_rate(c, rate, &new_parent, &new_div,
							&new_prate);
	if (rrate < rate || rrate > rate + md->data.rate_margin)
		return -EINVAL;

	old_parent = c->parent;
	old_div = md->data.div;

	/* Refer to the description of safe_freq in clock-generic.h */
	if (md->safe_freq)
		rc = set_src_div(md, md->safe_parent, md->safe_div);
	else if (new_parent == old_parent && new_div >= old_div) {
		/*
		 * If both the parent_rate and divider changes, there may be an
		 * intermediate frequency generated. Ensure this intermediate
		 * frequency is less than both the new rate and previous rate.
		 */
		rc = set_src_div(md, old_parent, new_div);
	}
	if (rc)
		return rc;

	new_parent_orig_rate = clk_get_rate(new_parent);
	rc = clk_set_rate(new_parent, new_prate);
	if (rc) {
		pr_err("failed to set %s to %ld\n",
			clk_name(new_parent), new_prate);
		goto err_set_rate;
	}

	rc = __clk_pre_reparent(c, new_parent, &flags);
	if (rc)
		goto err_pre_reparent;

	/* Set divider and mux src atomically */
	rc = __set_src_div(md, new_parent, new_div);
	if (rc)
		goto err_set_src_div;

	c->parent = new_parent;

	__clk_post_reparent(c, old_parent, &flags);
	return 0;

err_set_src_div:
	/* Not switching to new_parent, so disable it */
	__clk_post_reparent(c, new_parent, &flags);
err_pre_reparent:
	rc = clk_set_rate(new_parent, new_parent_orig_rate);
	WARN(rc, "%s: error changing new_parent (%s) rate back to %ld\n",
		clk_name(c), clk_name(new_parent), new_parent_orig_rate);
err_set_rate:
	rc = set_src_div(md, old_parent, old_div);
	WARN(rc, "%s: error changing back to original div (%d) and parent (%s)\n",
		clk_name(c), old_div, clk_name(old_parent));

	return rc;
}

static struct clk *mux_div_clk_get_parent(struct clk *c)
{
	struct mux_div_clk *md = to_mux_div_clk(c);
	u32 i, div, src_sel;

	md->ops->get_src_div(md, &src_sel, &div);

	md->data.div = div;
	md->src_sel = src_sel;

	for (i = 0; i < md->num_parents; i++) {
		if (md->parents[i].sel == src_sel)
			return md->parents[i].src;
	}

	return NULL;
}

static enum handoff mux_div_clk_handoff(struct clk *c)
{
	struct mux_div_clk *md = to_mux_div_clk(c);
	unsigned long parent_rate;
	unsigned int numer;

	parent_rate = clk_get_rate(c->parent);
	/*
	 * div values are doubled for half dividers.
	 * Adjust for that by picking a numer of 2.
	 */
	numer = md->data.is_half_divider ? 2 : 1;

	if (md->data.div) {
		c->rate = mult_frac(parent_rate, numer, md->data.div);
	} else {
		c->rate = 0;
		return HANDOFF_DISABLED_CLK;
	}

	if (md->en_mask && md->ops && md->ops->is_enabled)
		return md->ops->is_enabled(md)
			? HANDOFF_ENABLED_CLK
			: HANDOFF_DISABLED_CLK;

	/*
	 * If this function returns 'enabled' even when the clock downstream
	 * of this clock is disabled, then handoff code will unnecessarily
	 * enable the current parent of this clock. If this function always
	 * returns 'disabled' and a clock downstream is on, the clock handoff
	 * code will bump up the ref count for this clock and its current
	 * parent as necessary. So, clocks without an actual HW gate can
	 * always return disabled.
	 */
	return HANDOFF_DISABLED_CLK;
}

static void __iomem *mux_div_clk_list_registers(struct clk *c, int n,
				struct clk_register_data **regs, u32 *size)
{
	struct mux_div_clk *md = to_mux_div_clk(c);

	if (md->ops && md->ops->list_registers)
		return md->ops->list_registers(md, n, regs, size);

	return ERR_PTR(-EINVAL);
}

const struct clk_ops clk_ops_mux_div_clk = {
	.enable = mux_div_clk_enable,
	.disable = mux_div_clk_disable,
	.set_rate = mux_div_clk_set_rate,
	.round_rate = mux_div_clk_round_rate,
	.get_parent = mux_div_clk_get_parent,
	.handoff = mux_div_clk_handoff,
	.list_registers = mux_div_clk_list_registers,
};

2906	drivers/clk/msm/clock-local2.c (new file; diff suppressed because it is too large)
1279	drivers/clk/msm/clock-pll.c (new file; diff suppressed because it is too large)
472	drivers/clk/msm/clock-rpm.c (new file)
@@ -0,0 +1,472 @@
/* Copyright (c) 2010-2015, 2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#define pr_fmt(fmt) "%s: " fmt, __func__

#include <linux/err.h>
#include <linux/rtmutex.h>
#include <linux/clk/msm-clk-provider.h>
#include <soc/qcom/clock-rpm.h>
#include <soc/qcom/msm-clock-controller.h>

#define __clk_rpmrs_set_rate(r, value, ctx) \
	((r)->rpmrs_data->set_rate_fn((r), (value), (ctx)))

#define clk_rpmrs_set_rate_sleep(r, value) \
	__clk_rpmrs_set_rate((r), (value), (r)->rpmrs_data->ctx_sleep_id)

#define clk_rpmrs_set_rate_active(r, value) \
	__clk_rpmrs_set_rate((r), (value), (r)->rpmrs_data->ctx_active_id)

static int clk_rpmrs_set_rate_smd(struct rpm_clk *r, uint32_t value,
				uint32_t context)
{
	int ret;

	struct msm_rpm_kvp kvp = {
		.key = r->rpm_key,
		.data = (void *)&value,
		.length = sizeof(value),
	};

	switch (context) {
	case MSM_RPM_CTX_ACTIVE_SET:
		if (*r->last_active_set_vote == value)
			return 0;
		break;
	case MSM_RPM_CTX_SLEEP_SET:
		if (*r->last_sleep_set_vote == value)
			return 0;
		break;
	default:
		return -EINVAL;
	}

	ret = msm_rpm_send_message(context, r->rpm_res_type, r->rpm_clk_id,
			&kvp, 1);
	if (ret)
		return ret;

	switch (context) {
	case MSM_RPM_CTX_ACTIVE_SET:
		*r->last_active_set_vote = value;
		break;
	case MSM_RPM_CTX_SLEEP_SET:
		*r->last_sleep_set_vote = value;
		break;
	}

	return 0;
}

static int clk_rpmrs_handoff_smd(struct rpm_clk *r)
{
	if (!r->branch)
		r->c.rate = INT_MAX;

	return 0;
}

static int clk_rpmrs_is_enabled_smd(struct rpm_clk *r)
{
	return !!r->c.prepare_count;
}

struct clk_rpmrs_data {
	int (*set_rate_fn)(struct rpm_clk *r, uint32_t value, uint32_t context);
	int (*get_rate_fn)(struct rpm_clk *r);
	int (*handoff_fn)(struct rpm_clk *r);
	int (*is_enabled)(struct rpm_clk *r);
	int ctx_active_id;
	int ctx_sleep_id;
};

struct clk_rpmrs_data clk_rpmrs_data_smd = {
	.set_rate_fn = clk_rpmrs_set_rate_smd,
	.handoff_fn = clk_rpmrs_handoff_smd,
	.is_enabled = clk_rpmrs_is_enabled_smd,
	.ctx_active_id = MSM_RPM_CTX_ACTIVE_SET,
	.ctx_sleep_id = MSM_RPM_CTX_SLEEP_SET,
};

static DEFINE_RT_MUTEX(rpm_clock_lock);

static void to_active_sleep_khz(struct rpm_clk *r, unsigned long rate,
			unsigned long *active_khz, unsigned long *sleep_khz)
{
	/* Convert the rate (hz) to khz */
	*active_khz = DIV_ROUND_UP(rate, 1000);

	/*
	 * Active-only clocks don't care what the rate is during sleep. So,
	 * they vote for zero.
	 */
	if (r->active_only)
		*sleep_khz = 0;
	else
		*sleep_khz = *active_khz;
}

static int rpm_clk_prepare(struct clk *clk)
{
	struct rpm_clk *r = to_rpm_clk(clk);
	uint32_t value;
	int rc = 0;
	unsigned long this_khz, this_sleep_khz;
	unsigned long peer_khz = 0, peer_sleep_khz = 0;
	struct rpm_clk *peer = r->peer;

	rt_mutex_lock(&rpm_clock_lock);

	to_active_sleep_khz(r, r->c.rate, &this_khz, &this_sleep_khz);

	/* Don't send requests to the RPM if the rate has not been set. */
	if (this_khz == 0)
		goto out;

	/* Take peer clock's rate into account only if it's enabled. */
	if (peer->enabled)
		to_active_sleep_khz(peer, peer->c.rate,
				&peer_khz, &peer_sleep_khz);

	value = max(this_khz, peer_khz);
	if (r->branch)
		value = !!value;

	rc = clk_rpmrs_set_rate_active(r, value);
	if (rc)
		goto out;

	value = max(this_sleep_khz, peer_sleep_khz);
	if (r->branch)
		value = !!value;

	rc = clk_rpmrs_set_rate_sleep(r, value);
	if (rc) {
		/* Undo the active set vote and restore it to peer_khz */
		value = peer_khz;
		rc = clk_rpmrs_set_rate_active(r, value);
	}

out:
	if (!rc)
		r->enabled = true;

	rt_mutex_unlock(&rpm_clock_lock);

	return rc;
}

static void rpm_clk_unprepare(struct clk *clk)
{
	struct rpm_clk *r = to_rpm_clk(clk);

	rt_mutex_lock(&rpm_clock_lock);

	if (r->c.rate) {
		uint32_t value;
		struct rpm_clk *peer = r->peer;
		unsigned long peer_khz = 0, peer_sleep_khz = 0;
		int rc;

		/* Take peer clock's rate into account only if it's enabled. */
		if (peer->enabled)
			to_active_sleep_khz(peer, peer->c.rate,
					&peer_khz, &peer_sleep_khz);

		value = r->branch ? !!peer_khz : peer_khz;
		rc = clk_rpmrs_set_rate_active(r, value);
		if (rc)
			goto out;

		value = r->branch ? !!peer_sleep_khz : peer_sleep_khz;
		rc = clk_rpmrs_set_rate_sleep(r, value);
	}
	r->enabled = false;
out:
	rt_mutex_unlock(&rpm_clock_lock);
}

static int rpm_clk_set_rate(struct clk *clk, unsigned long rate)
{
	struct rpm_clk *r = to_rpm_clk(clk);
	unsigned long this_khz, this_sleep_khz;
	int rc = 0;

	rt_mutex_lock(&rpm_clock_lock);

	if (r->enabled) {
		uint32_t value;
		struct rpm_clk *peer = r->peer;
		unsigned long peer_khz = 0, peer_sleep_khz = 0;

		to_active_sleep_khz(r, rate, &this_khz, &this_sleep_khz);

		/* Take peer clock's rate into account only if it's enabled. */
		if (peer->enabled)
			to_active_sleep_khz(peer, peer->c.rate,
					&peer_khz, &peer_sleep_khz);

		value = max(this_khz, peer_khz);
		rc = clk_rpmrs_set_rate_active(r, value);
		if (rc)
			goto out;

		value = max(this_sleep_khz, peer_sleep_khz);
		rc = clk_rpmrs_set_rate_sleep(r, value);
	}

out:
	rt_mutex_unlock(&rpm_clock_lock);

	return rc;
}

static unsigned long rpm_clk_get_rate(struct clk *clk)
{
	struct rpm_clk *r = to_rpm_clk(clk);

	if (r->rpmrs_data->get_rate_fn)
		return r->rpmrs_data->get_rate_fn(r);
	else
		return clk->rate;
}

static int rpm_clk_is_enabled(struct clk *clk)
{
	struct rpm_clk *r = to_rpm_clk(clk);

	return r->rpmrs_data->is_enabled(r);
}

static long rpm_clk_round_rate(struct clk *clk, unsigned long rate)
{
	/* Not supported. */
	return rate;
}

static bool rpm_clk_is_local(struct clk *clk)
{
	return false;
}

static enum handoff rpm_clk_handoff(struct clk *clk)
{
	struct rpm_clk *r = to_rpm_clk(clk);
	int rc;

	/*
	 * Querying an RPM clock's status will return 0 unless the clock's
	 * rate has previously been set through the RPM. When handing off,
	 * assume these clocks are enabled (unless the RPM call fails) so
	 * child clocks of these RPM clocks can still be handed off.
	 */
	rc = r->rpmrs_data->handoff_fn(r);
	if (rc < 0)
		return HANDOFF_DISABLED_CLK;

	/*
	 * Since RPM handoff code may update the software rate of the clock by
	 * querying the RPM, we need to make sure our request to RPM now
	 * matches the software rate of the clock. When we send the request
	 * to RPM, we also need to update any other state info we would
	 * normally update. So, call the appropriate clock function instead
	 * of directly using the RPM driver APIs.
	 */
	rc = rpm_clk_prepare(clk);
	if (rc < 0)
		return HANDOFF_DISABLED_CLK;

	return HANDOFF_ENABLED_CLK;
}

#define RPM_MISC_CLK_TYPE	0x306b6c63
#define RPM_SCALING_ENABLE_ID	0x2

int enable_rpm_scaling(void)
{
	int rc, value = 0x1;
	static int is_inited;

	struct msm_rpm_kvp kvp = {
		.key = RPM_SMD_KEY_ENABLE,
		.data = (void *)&value,
		.length = sizeof(value),
	};

	if (is_inited)
		return 0;

	rc = msm_rpm_send_message_noirq(MSM_RPM_CTX_SLEEP_SET,
			RPM_MISC_CLK_TYPE, RPM_SCALING_ENABLE_ID, &kvp, 1);
	if (rc < 0) {
		if (rc != -EPROBE_DEFER)
			WARN(1, "RPM clock scaling (sleep set) did not enable!\n");
		return rc;
	}

	rc = msm_rpm_send_message_noirq(MSM_RPM_CTX_ACTIVE_SET,
			RPM_MISC_CLK_TYPE, RPM_SCALING_ENABLE_ID, &kvp, 1);
	if (rc < 0) {
		if (rc != -EPROBE_DEFER)
			WARN(1, "RPM clock scaling (active set) did not enable!\n");
		return rc;
	}

	is_inited++;
	return 0;
}

int vote_bimc(struct rpm_clk *r, uint32_t value)
{
	int rc;

	struct msm_rpm_kvp kvp = {
		.key = r->rpm_key,
		.data = (void *)&value,
		.length = sizeof(value),
	};

	rc = msm_rpm_send_message_noirq(MSM_RPM_CTX_ACTIVE_SET,
			r->rpm_res_type, r->rpmrs_data->ctx_active_id,
			&kvp, 1);
	if (rc < 0) {
		if (rc != -EPROBE_DEFER)
			WARN(1, "BIMC vote not sent!\n");
		return rc;
	}

	return rc;
}

const struct clk_ops clk_ops_rpm = {
	.prepare = rpm_clk_prepare,
	.unprepare = rpm_clk_unprepare,
	.set_rate = rpm_clk_set_rate,
	.get_rate = rpm_clk_get_rate,
	.is_enabled = rpm_clk_is_enabled,
	.round_rate = rpm_clk_round_rate,
|
||||
.is_local = rpm_clk_is_local,
|
||||
.handoff = rpm_clk_handoff,
|
||||
};
|
||||
|
||||
const struct clk_ops clk_ops_rpm_branch = {
|
||||
.prepare = rpm_clk_prepare,
|
||||
.unprepare = rpm_clk_unprepare,
|
||||
.is_local = rpm_clk_is_local,
|
||||
.handoff = rpm_clk_handoff,
|
||||
};
|
||||
|
||||
static struct rpm_clk *rpm_clk_dt_parser_common(struct device *dev,
|
||||
struct device_node *np)
|
||||
{
|
||||
struct rpm_clk *rpm, *peer;
|
||||
struct clk *c;
|
||||
int rc = 0;
|
||||
phandle p;
|
||||
const char *str;
|
||||
|
||||
rpm = devm_kzalloc(dev, sizeof(*rpm), GFP_KERNEL);
|
||||
if (!rpm)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
rc = of_property_read_phandle_index(np, "qcom,rpm-peer", 0, &p);
|
||||
if (rc) {
|
||||
dt_err(np, "missing qcom,rpm-peer dt property\n");
|
||||
return ERR_PTR(rc);
|
||||
}
|
||||
|
||||
/* Rely on whoever's called last to setup the circular ref */
|
||||
c = msmclk_lookup_phandle(dev, p);
|
||||
if (!IS_ERR(c)) {
|
||||
uint32_t *sleep = devm_kzalloc(dev, sizeof(uint32_t),
|
||||
GFP_KERNEL);
|
||||
uint32_t *active =
|
||||
devm_kzalloc(dev, sizeof(uint32_t),
|
||||
GFP_KERNEL);
|
||||
|
||||
if (!sleep || !active)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
peer = to_rpm_clk(c);
|
||||
peer->peer = rpm;
|
||||
rpm->peer = peer;
|
||||
rpm->last_active_set_vote = active;
|
||||
peer->last_active_set_vote = active;
|
||||
rpm->last_sleep_set_vote = sleep;
|
||||
peer->last_sleep_set_vote = sleep;
|
||||
}
|
||||
|
||||
rpm->rpmrs_data = &clk_rpmrs_data_smd;
|
||||
rpm->active_only = of_device_is_compatible(np, "qcom,rpm-a-clk") ||
|
||||
of_device_is_compatible(np, "qcom,rpm-branch-a-clk");
|
||||
|
||||
rc = of_property_read_string(np, "qcom,res-type", &str);
|
||||
if (rc) {
|
||||
dt_err(np, "missing qcom,res-type dt property\n");
|
||||
return ERR_PTR(rc);
|
||||
}
|
||||
if (sscanf(str, "%4c", (char *) &rpm->rpm_res_type) <= 0)
|
||||
return ERR_PTR(-EINVAL);
|
||||
|
||||
rc = of_property_read_u32(np, "qcom,res-id", &rpm->rpm_clk_id);
|
||||
if (rc) {
|
||||
dt_err(np, "missing qcom,res-id dt property\n");
|
||||
return ERR_PTR(rc);
|
||||
}
|
||||
|
||||
rc = of_property_read_string(np, "qcom,key", &str);
|
||||
if (rc) {
|
||||
dt_err(np, "missing qcom,key dt property\n");
|
||||
return ERR_PTR(rc);
|
||||
}
|
||||
if (sscanf(str, "%4c", (char *) &rpm->rpm_key) <= 0)
|
||||
return ERR_PTR(-EINVAL);
|
||||
return rpm;
|
||||
}
|
||||
|
||||
static void *rpm_clk_dt_parser(struct device *dev, struct device_node *np)
|
||||
{
|
||||
struct rpm_clk *rpm;
|
||||
|
||||
rpm = rpm_clk_dt_parser_common(dev, np);
|
||||
if (IS_ERR(rpm))
|
||||
return rpm;
|
||||
|
||||
rpm->c.ops = &clk_ops_rpm;
|
||||
return msmclk_generic_clk_init(dev, np, &rpm->c);
|
||||
}
|
||||
|
||||
static void *rpm_branch_clk_dt_parser(struct device *dev,
|
||||
struct device_node *np)
|
||||
{
|
||||
struct rpm_clk *rpm;
|
||||
u32 rate;
|
||||
int rc;
|
||||
|
||||
rpm = rpm_clk_dt_parser_common(dev, np);
|
||||
if (IS_ERR(rpm))
|
||||
return rpm;
|
||||
|
||||
rpm->c.ops = &clk_ops_rpm_branch;
|
||||
rpm->branch = true;
|
||||
|
||||
rc = of_property_read_u32(np, "qcom,rcg-init-rate", &rate);
|
||||
if (!rc)
|
||||
rpm->c.rate = rate;
|
||||
|
||||
return msmclk_generic_clk_init(dev, np, &rpm->c);
|
||||
}
|
||||
MSMCLK_PARSER(rpm_clk_dt_parser, "qcom,rpm-clk", 0);
|
||||
MSMCLK_PARSER(rpm_clk_dt_parser, "qcom,rpm-a-clk", 1);
|
||||
MSMCLK_PARSER(rpm_branch_clk_dt_parser, "qcom,rpm-branch-clk", 0);
|
||||
MSMCLK_PARSER(rpm_branch_clk_dt_parser, "qcom,rpm-branch-a-clk", 1);
|
||||
202
drivers/clk/msm/clock-voter.c
Normal file
@@ -0,0 +1,202 @@
/* Copyright (c) 2010-2015, 2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#define pr_fmt(fmt) "%s: " fmt, __func__

#include <linux/err.h>
#include <linux/rtmutex.h>
#include <linux/clk.h>
#include <linux/clk/msm-clk-provider.h>
#include <soc/qcom/clock-voter.h>
#include <soc/qcom/msm-clock-controller.h>

static DEFINE_RT_MUTEX(voter_clk_lock);

/* Aggregate the rate of clocks that are currently on. */
static unsigned long voter_clk_aggregate_rate(const struct clk *parent)
{
	struct clk *clk;
	unsigned long rate = 0;

	list_for_each_entry(clk, &parent->children, siblings) {
		struct clk_voter *v = to_clk_voter(clk);

		if (v->enabled)
			rate = max(clk->rate, rate);
	}
	return rate;
}

static int voter_clk_set_rate(struct clk *clk, unsigned long rate)
{
	int ret = 0;
	struct clk *clkp;
	struct clk_voter *clkh, *v = to_clk_voter(clk);
	unsigned long cur_rate, new_rate, other_rate = 0;

	if (v->is_branch)
		return 0;

	rt_mutex_lock(&voter_clk_lock);

	if (v->enabled) {
		struct clk *parent = clk->parent;

		/*
		 * Get the aggregate rate without this clock's vote and update
		 * if the new rate is different than the current rate
		 */
		list_for_each_entry(clkp, &parent->children, siblings) {
			clkh = to_clk_voter(clkp);
			if (clkh->enabled && clkh != v)
				other_rate = max(clkp->rate, other_rate);
		}

		cur_rate = max(other_rate, clk->rate);
		new_rate = max(other_rate, rate);

		if (new_rate != cur_rate) {
			ret = clk_set_rate(parent, new_rate);
			if (ret)
				goto unlock;
		}
	}
	clk->rate = rate;
unlock:
	rt_mutex_unlock(&voter_clk_lock);

	return ret;
}

static int voter_clk_prepare(struct clk *clk)
{
	int ret = 0;
	unsigned long cur_rate;
	struct clk *parent;
	struct clk_voter *v = to_clk_voter(clk);

	rt_mutex_lock(&voter_clk_lock);
	parent = clk->parent;

	if (v->is_branch) {
		v->enabled = true;
		goto out;
	}

	/*
	 * Increase the rate if this clock is voting for a higher rate
	 * than the current rate.
	 */
	cur_rate = voter_clk_aggregate_rate(parent);
	if (clk->rate > cur_rate) {
		ret = clk_set_rate(parent, clk->rate);
		if (ret)
			goto out;
	}
	v->enabled = true;
out:
	rt_mutex_unlock(&voter_clk_lock);

	return ret;
}

static void voter_clk_unprepare(struct clk *clk)
{
	unsigned long cur_rate, new_rate;
	struct clk *parent;
	struct clk_voter *v = to_clk_voter(clk);

	rt_mutex_lock(&voter_clk_lock);
	parent = clk->parent;

	/*
	 * Decrease the rate if this clock was the only one voting for
	 * the highest rate.
	 */
	v->enabled = false;
	if (v->is_branch)
		goto out;

	new_rate = voter_clk_aggregate_rate(parent);
	cur_rate = max(new_rate, clk->rate);

	if (new_rate < cur_rate)
		clk_set_rate(parent, new_rate);

out:
	rt_mutex_unlock(&voter_clk_lock);
}

static int voter_clk_is_enabled(struct clk *clk)
{
	struct clk_voter *v = to_clk_voter(clk);

	return v->enabled;
}

static long voter_clk_round_rate(struct clk *clk, unsigned long rate)
{
	return clk_round_rate(clk->parent, rate);
}

static bool voter_clk_is_local(struct clk *clk)
{
	return true;
}

static enum handoff voter_clk_handoff(struct clk *clk)
{
	if (!clk->rate)
		return HANDOFF_DISABLED_CLK;

	/*
	 * Send the default rate to the parent if necessary and update the
	 * software state of the voter clock.
	 */
	if (voter_clk_prepare(clk) < 0)
		return HANDOFF_DISABLED_CLK;

	return HANDOFF_ENABLED_CLK;
}

const struct clk_ops clk_ops_voter = {
	.prepare = voter_clk_prepare,
	.unprepare = voter_clk_unprepare,
	.set_rate = voter_clk_set_rate,
	.is_enabled = voter_clk_is_enabled,
	.round_rate = voter_clk_round_rate,
	.is_local = voter_clk_is_local,
	.handoff = voter_clk_handoff,
};

static void *sw_vote_clk_dt_parser(struct device *dev,
					struct device_node *np)
{
	struct clk_voter *v;
	int rc;
	u32 temp;

	v = devm_kzalloc(dev, sizeof(*v), GFP_KERNEL);
	if (!v)
		return ERR_PTR(-ENOMEM);

	rc = of_property_read_u32(np, "qcom,config-rate", &temp);
	if (rc) {
		dt_prop_err(np, "qcom,config-rate", "is missing");
		return ERR_PTR(rc);
	}

	v->c.ops = &clk_ops_voter;
	return msmclk_generic_clk_init(dev, np, &v->c);
}
MSMCLK_PARSER(sw_vote_clk_dt_parser, "qcom,sw-vote-clk", 0);
1405
drivers/clk/msm/clock.c
Normal file
File diff suppressed because it is too large
52
drivers/clk/msm/clock.h
Normal file
@@ -0,0 +1,52 @@
/* Copyright (c) 2013-2014, 2017, The Linux Foundation. All rights reserved.
 *
 * This software is licensed under the terms of the GNU General Public
 * License version 2, as published by the Free Software Foundation, and
 * may be copied, distributed, and modified under those terms.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#ifndef __DRIVERS_CLK_MSM_CLOCK_H
#define __DRIVERS_CLK_MSM_CLOCK_H

#include <linux/clkdev.h>

/**
 * struct clock_init_data - SoC specific clock initialization data
 * @table: table of lookups to add
 * @size: size of @table
 * @pre_init: called before initializing the clock driver.
 * @post_init: called after registering @table. clock APIs can be called inside.
 * @late_init: called during late init
 */
struct clock_init_data {
	struct list_head list;
	struct clk_lookup *table;
	size_t size;
	void (*pre_init)(void);
	void (*post_init)(void);
	int (*late_init)(void);
};

int msm_clock_init(struct clock_init_data *data);
int find_vdd_level(struct clk *clk, unsigned long rate);
extern struct list_head orphan_clk_list;

#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_COMMON_CLK_MSM)
int clock_debug_register(struct clk *clk);
void clock_debug_print_enabled(bool print_parent);
#elif defined(CONFIG_DEBUG_FS) && defined(CONFIG_COMMON_CLK_QCOM)
void clock_debug_print_enabled(bool print_parent);
#else
static inline int clock_debug_register(struct clk *unused)
{
	return 0;
}
static inline void clock_debug_print_enabled(bool print_parent) { }
#endif

#endif
720
drivers/clk/msm/gdsc.c
Normal file
@@ -0,0 +1,720 @@
/* Copyright (c) 2012-2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/io.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/regulator/driver.h>
#include <linux/regulator/machine.h>
#include <linux/reset.h>
#include <linux/regulator/of_regulator.h>
#include <linux/slab.h>
#include <linux/clk.h>
#include <linux/clk/msm-clk.h>

#define PWR_ON_MASK		BIT(31)
#define EN_REST_WAIT_MASK	(0xF << 20)
#define EN_FEW_WAIT_MASK	(0xF << 16)
#define CLK_DIS_WAIT_MASK	(0xF << 12)
#define SW_OVERRIDE_MASK	BIT(2)
#define HW_CONTROL_MASK		BIT(1)
#define SW_COLLAPSE_MASK	BIT(0)
#define GMEM_CLAMP_IO_MASK	BIT(0)
#define GMEM_RESET_MASK		BIT(4)
#define BCR_BLK_ARES_BIT	BIT(0)

/* Wait 2^n CXO cycles between all states. Here, n=2 (4 cycles). */
#define EN_REST_WAIT_VAL	(0x2 << 20)
#define EN_FEW_WAIT_VAL		(0x8 << 16)
#define CLK_DIS_WAIT_VAL	(0x2 << 12)

#define TIMEOUT_US		100

struct gdsc {
	struct regulator_dev	*rdev;
	struct regulator_desc	rdesc;
	void __iomem		*gdscr;
	struct clk		**clocks;
	struct reset_control	**reset_clocks;
	int			clock_count;
	int			reset_count;
	bool			toggle_mem;
	bool			toggle_periph;
	bool			toggle_logic;
	bool			resets_asserted;
	bool			root_en;
	bool			force_root_en;
	int			root_clk_idx;
	bool			no_status_check_on_disable;
	bool			is_gdsc_enabled;
	bool			allow_clear;
	bool			reset_aon;
	void __iomem		*domain_addr;
	void __iomem		*hw_ctrl_addr;
	void __iomem		*sw_reset_addr;
	u32			gds_timeout;
};

enum gdscr_status {
	ENABLED,
	DISABLED,
};

static DEFINE_MUTEX(gdsc_seq_lock);

void gdsc_allow_clear_retention(struct regulator *regulator)
{
	struct gdsc *sc = regulator_get_drvdata(regulator);

	if (sc)
		sc->allow_clear = true;
}

static int poll_gdsc_status(struct gdsc *sc, enum gdscr_status status)
{
	void __iomem *gdscr;
	int count = sc->gds_timeout;
	u32 val;

	if (sc->hw_ctrl_addr)
		gdscr = sc->hw_ctrl_addr;
	else
		gdscr = sc->gdscr;

	for (; count > 0; count--) {
		val = readl_relaxed(gdscr);
		val &= PWR_ON_MASK;
		switch (status) {
		case ENABLED:
			if (val)
				return 0;
			break;
		case DISABLED:
			if (!val)
				return 0;
			break;
		}
		/*
		 * There is no guarantee about the delay needed for the enable
		 * bit in the GDSCR to be set or reset after the GDSC state
		 * changes. Hence, keep on checking for a reasonable number
		 * of times until the bit is set with the least possible delay
		 * between successive tries.
		 */
		udelay(1);
	}
	return -ETIMEDOUT;
}
|
||||
|
||||
static int gdsc_is_enabled(struct regulator_dev *rdev)
|
||||
{
|
||||
struct gdsc *sc = rdev_get_drvdata(rdev);
|
||||
uint32_t regval;
|
||||
|
||||
if (!sc->toggle_logic)
|
||||
return !sc->resets_asserted;
|
||||
|
||||
regval = readl_relaxed(sc->gdscr);
|
||||
if (regval & PWR_ON_MASK) {
|
||||
/*
|
||||
* The GDSC might be turned on due to TZ/HYP vote on the
|
||||
* votable GDS registers. Check the SW_COLLAPSE_MASK to
|
||||
* determine if HLOS has voted for it.
|
||||
*/
|
||||
if (!(regval & SW_COLLAPSE_MASK))
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
static int gdsc_enable(struct regulator_dev *rdev)
|
||||
{
|
||||
struct gdsc *sc = rdev_get_drvdata(rdev);
|
||||
uint32_t regval, hw_ctrl_regval = 0x0;
|
||||
int i, ret = 0;
|
||||
|
||||
mutex_lock(&gdsc_seq_lock);
|
||||
|
||||
if (sc->root_en || sc->force_root_en)
|
||||
clk_prepare_enable(sc->clocks[sc->root_clk_idx]);
|
||||
|
||||
if (sc->toggle_logic) {
|
||||
if (sc->sw_reset_addr) {
|
||||
regval = readl_relaxed(sc->sw_reset_addr);
|
||||
regval |= BCR_BLK_ARES_BIT;
|
||||
writel_relaxed(regval, sc->sw_reset_addr);
|
||||
/*
|
||||
* BLK_ARES should be kept asserted for 1us before
|
||||
* being de-asserted.
|
||||
*/
|
||||
wmb();
|
||||
udelay(1);
|
||||
|
||||
regval &= ~BCR_BLK_ARES_BIT;
|
||||
writel_relaxed(regval, sc->sw_reset_addr);
|
||||
|
||||
/* Make sure de-assert goes through before continuing */
|
||||
wmb();
|
||||
}
|
||||
|
||||
if (sc->domain_addr) {
|
||||
if (sc->reset_aon) {
|
||||
regval = readl_relaxed(sc->domain_addr);
|
||||
regval |= GMEM_RESET_MASK;
|
||||
writel_relaxed(regval, sc->domain_addr);
|
||||
/*
|
||||
* Keep reset asserted for at-least 1us before
|
||||
* continuing.
|
||||
*/
|
||||
wmb();
|
||||
udelay(1);
|
||||
|
||||
regval &= ~GMEM_RESET_MASK;
|
||||
writel_relaxed(regval, sc->domain_addr);
|
||||
/*
|
||||
* Make sure GMEM_RESET is de-asserted before
|
||||
* continuing.
|
||||
*/
|
||||
wmb();
|
||||
}
|
||||
|
||||
regval = readl_relaxed(sc->domain_addr);
|
||||
regval &= ~GMEM_CLAMP_IO_MASK;
|
||||
writel_relaxed(regval, sc->domain_addr);
|
||||
/*
|
||||
* Make sure CLAMP_IO is de-asserted before continuing.
|
||||
*/
|
||||
wmb();
|
||||
}
|
||||
|
||||
regval = readl_relaxed(sc->gdscr);
|
||||
if (regval & HW_CONTROL_MASK) {
|
||||
dev_warn(&rdev->dev, "Invalid enable while %s is under HW control\n",
|
||||
sc->rdesc.name);
|
||||
mutex_unlock(&gdsc_seq_lock);
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
regval &= ~SW_COLLAPSE_MASK;
|
||||
writel_relaxed(regval, sc->gdscr);
|
||||
|
||||
/* Wait for 8 XO cycles before polling the status bit. */
|
||||
mb();
|
||||
udelay(1);
|
||||
|
||||
ret = poll_gdsc_status(sc, ENABLED);
|
||||
if (ret) {
|
||||
regval = readl_relaxed(sc->gdscr);
|
||||
if (sc->hw_ctrl_addr) {
|
||||
hw_ctrl_regval =
|
||||
readl_relaxed(sc->hw_ctrl_addr);
|
||||
dev_warn(&rdev->dev, "%s state (after %d us timeout): 0x%x, GDS_HW_CTRL: 0x%x. Re-polling.\n",
|
||||
sc->rdesc.name, sc->gds_timeout,
|
||||
regval, hw_ctrl_regval);
|
||||
|
||||
ret = poll_gdsc_status(sc, ENABLED);
|
||||
if (ret) {
|
||||
dev_err(&rdev->dev, "%s final state (after additional %d us timeout): 0x%x, GDS_HW_CTRL: 0x%x\n",
|
||||
sc->rdesc.name, sc->gds_timeout,
|
||||
readl_relaxed(sc->gdscr),
|
||||
readl_relaxed(sc->hw_ctrl_addr));
|
||||
|
||||
mutex_unlock(&gdsc_seq_lock);
|
||||
return ret;
|
||||
}
|
||||
} else {
|
||||
dev_err(&rdev->dev, "%s enable timed out: 0x%x\n",
|
||||
sc->rdesc.name,
|
||||
regval);
|
||||
udelay(sc->gds_timeout);
|
||||
regval = readl_relaxed(sc->gdscr);
|
||||
dev_err(&rdev->dev, "%s final state: 0x%x (%d us after timeout)\n",
|
||||
sc->rdesc.name, regval,
|
||||
sc->gds_timeout);
|
||||
mutex_unlock(&gdsc_seq_lock);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
} else {
|
||||
for (i = 0; i < sc->reset_count; i++)
|
||||
reset_control_deassert(sc->reset_clocks[i]);
|
||||
sc->resets_asserted = false;
|
||||
}
|
||||
|
||||
for (i = 0; i < sc->clock_count; i++) {
|
||||
if (unlikely(i == sc->root_clk_idx))
|
||||
continue;
|
||||
if (sc->toggle_mem)
|
||||
clk_set_flags(sc->clocks[i], CLKFLAG_RETAIN_MEM);
|
||||
if (sc->toggle_periph)
|
||||
clk_set_flags(sc->clocks[i], CLKFLAG_RETAIN_PERIPH);
|
||||
}
|
||||
|
||||
/*
|
||||
* If clocks to this power domain were already on, they will take an
|
||||
* additional 4 clock cycles to re-enable after the rail is enabled.
|
||||
* Delay to account for this. A delay is also needed to ensure clocks
|
||||
* are not enabled within 400ns of enabling power to the memories.
|
||||
*/
|
||||
udelay(1);
|
||||
|
||||
/* Delay to account for staggered memory powerup. */
|
||||
udelay(1);
|
||||
|
||||
if (sc->force_root_en)
|
||||
clk_disable_unprepare(sc->clocks[sc->root_clk_idx]);
|
||||
sc->is_gdsc_enabled = true;
|
||||
|
||||
mutex_unlock(&gdsc_seq_lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int gdsc_disable(struct regulator_dev *rdev)
|
||||
{
|
||||
struct gdsc *sc = rdev_get_drvdata(rdev);
|
||||
uint32_t regval;
|
||||
int i, ret = 0;
|
||||
|
||||
mutex_lock(&gdsc_seq_lock);
|
||||
|
||||
if (sc->force_root_en)
|
||||
clk_prepare_enable(sc->clocks[sc->root_clk_idx]);
|
||||
|
||||
for (i = sc->clock_count-1; i >= 0; i--) {
|
||||
if (unlikely(i == sc->root_clk_idx))
|
||||
continue;
|
||||
if (sc->toggle_mem && sc->allow_clear)
|
||||
clk_set_flags(sc->clocks[i], CLKFLAG_NORETAIN_MEM);
|
||||
if (sc->toggle_periph && sc->allow_clear)
|
||||
clk_set_flags(sc->clocks[i], CLKFLAG_NORETAIN_PERIPH);
|
||||
}
|
||||
|
||||
/* Delay to account for staggered memory powerdown. */
|
||||
udelay(1);
|
||||
|
||||
if (sc->toggle_logic) {
|
||||
regval = readl_relaxed(sc->gdscr);
|
||||
if (regval & HW_CONTROL_MASK) {
|
||||
dev_warn(&rdev->dev, "Invalid disable while %s is under HW control\n",
|
||||
sc->rdesc.name);
|
||||
mutex_unlock(&gdsc_seq_lock);
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
regval |= SW_COLLAPSE_MASK;
|
||||
writel_relaxed(regval, sc->gdscr);
|
||||
/* Wait for 8 XO cycles before polling the status bit. */
|
||||
mb();
|
||||
udelay(1);
|
||||
|
||||
if (sc->no_status_check_on_disable) {
|
||||
/*
|
||||
* Add a short delay here to ensure that gdsc_enable
|
||||
* right after it was disabled does not put it in a
|
||||
* weird state.
|
||||
*/
|
||||
udelay(TIMEOUT_US);
|
||||
} else {
|
||||
ret = poll_gdsc_status(sc, DISABLED);
|
||||
if (ret)
|
||||
dev_err(&rdev->dev, "%s disable timed out: 0x%x\n",
|
||||
sc->rdesc.name, regval);
|
||||
}
|
||||
|
||||
if (sc->domain_addr) {
|
||||
regval = readl_relaxed(sc->domain_addr);
|
||||
regval |= GMEM_CLAMP_IO_MASK;
|
||||
writel_relaxed(regval, sc->domain_addr);
|
||||
/* Make sure CLAMP_IO is asserted before continuing. */
|
||||
wmb();
|
||||
}
|
||||
} else {
|
||||
for (i = sc->reset_count-1; i >= 0; i--)
|
||||
reset_control_assert(sc->reset_clocks[i]);
|
||||
sc->resets_asserted = true;
|
||||
}
|
||||
|
||||
/*
|
||||
* Check if gdsc_enable was called for this GDSC. If not, the root
|
||||
* clock will not have been enabled prior to this.
|
||||
*/
|
||||
if ((sc->is_gdsc_enabled && sc->root_en) || sc->force_root_en)
|
||||
clk_disable_unprepare(sc->clocks[sc->root_clk_idx]);
|
||||
sc->is_gdsc_enabled = false;
|
||||
|
||||
mutex_unlock(&gdsc_seq_lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static unsigned int gdsc_get_mode(struct regulator_dev *rdev)
|
||||
{
|
||||
struct gdsc *sc = rdev_get_drvdata(rdev);
|
||||
uint32_t regval;
|
||||
|
||||
mutex_lock(&gdsc_seq_lock);
|
||||
regval = readl_relaxed(sc->gdscr);
|
||||
mutex_unlock(&gdsc_seq_lock);
|
||||
if (regval & HW_CONTROL_MASK)
|
||||
return REGULATOR_MODE_FAST;
|
||||
return REGULATOR_MODE_NORMAL;
|
||||
}
|
||||
|
||||
static int gdsc_set_mode(struct regulator_dev *rdev, unsigned int mode)
|
||||
{
|
||||
struct gdsc *sc = rdev_get_drvdata(rdev);
|
||||
uint32_t regval;
|
||||
int ret = 0;
|
||||
|
||||
mutex_lock(&gdsc_seq_lock);
|
||||
|
||||
regval = readl_relaxed(sc->gdscr);
|
||||
|
||||
/*
|
||||
* HW control can only be enable/disabled when SW_COLLAPSE
|
||||
* indicates on.
|
||||
*/
|
||||
if (regval & SW_COLLAPSE_MASK) {
|
||||
dev_err(&rdev->dev, "can't enable hw collapse now\n");
|
||||
mutex_unlock(&gdsc_seq_lock);
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
switch (mode) {
|
||||
case REGULATOR_MODE_FAST:
|
||||
/* Turn on HW trigger mode */
|
||||
regval |= HW_CONTROL_MASK;
|
||||
writel_relaxed(regval, sc->gdscr);
|
||||
/*
|
||||
* There may be a race with internal HW trigger signal,
|
||||
* that will result in GDSC going through a power down and
|
||||
* up cycle. In case HW trigger signal is controlled by
|
||||
* firmware that also poll same status bits as we do, FW
|
||||
* might read an 'on' status before the GDSC can finish
|
||||
* power cycle. We wait 1us before returning to ensure
|
||||
* FW can't immediately poll the status bit.
|
||||
*/
|
||||
mb();
|
||||
udelay(1);
|
||||
break;
|
||||
|
||||
case REGULATOR_MODE_NORMAL:
|
||||
/* Turn off HW trigger mode */
|
||||
regval &= ~HW_CONTROL_MASK;
|
||||
writel_relaxed(regval, sc->gdscr);
|
||||
/*
|
||||
* There may be a race with internal HW trigger signal,
|
||||
* that will result in GDSC going through a power down and
|
||||
* up cycle. If we poll too early, status bit will
|
||||
* indicate 'on' before the GDSC can finish the power cycle.
|
||||
* Account for this case by waiting 1us before polling.
|
||||
*/
|
||||
mb();
|
||||
udelay(1);
|
||||
|
||||
ret = poll_gdsc_status(sc, ENABLED);
|
||||
if (ret)
|
||||
dev_err(&rdev->dev, "%s set_mode timed out: 0x%x\n",
|
||||
sc->rdesc.name, regval);
|
||||
break;
|
||||
default:
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
|
||||
mutex_unlock(&gdsc_seq_lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static struct regulator_ops gdsc_ops = {
|
||||
.is_enabled = gdsc_is_enabled,
|
||||
.enable = gdsc_enable,
|
||||
.disable = gdsc_disable,
|
||||
.set_mode = gdsc_set_mode,
|
||||
.get_mode = gdsc_get_mode,
|
||||
};
|
||||
|
||||
static int gdsc_probe(struct platform_device *pdev)
|
||||
{
|
||||
static atomic_t gdsc_count = ATOMIC_INIT(-1);
|
||||
struct regulator_config reg_config = {};
|
||||
struct regulator_init_data *init_data;
|
||||
struct resource *res;
|
||||
struct gdsc *sc;
|
||||
uint32_t regval, clk_dis_wait_val = CLK_DIS_WAIT_VAL;
|
||||
bool retain_mem, retain_periph, support_hw_trigger;
|
||||
int i, ret;
|
||||
u32 timeout;
|
||||
|
||||
sc = devm_kzalloc(&pdev->dev, sizeof(struct gdsc), GFP_KERNEL);
|
||||
if (sc == NULL)
|
||||
return -ENOMEM;
|
||||
|
||||
init_data = of_get_regulator_init_data(&pdev->dev, pdev->dev.of_node,
|
||||
&sc->rdesc);
|
||||
if (init_data == NULL)
|
||||
return -ENOMEM;
|
||||
|
||||
if (of_get_property(pdev->dev.of_node, "parent-supply", NULL))
|
||||
init_data->supply_regulator = "parent";
|
||||
|
||||
ret = of_property_read_string(pdev->dev.of_node, "regulator-name",
|
||||
&sc->rdesc.name);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
if (res == NULL)
|
||||
return -EINVAL;
|
||||
sc->gdscr = devm_ioremap(&pdev->dev, res->start, resource_size(res));
|
||||
if (sc->gdscr == NULL)
|
||||
return -ENOMEM;
|
||||
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
|
||||
"domain_addr");
|
||||
if (res) {
|
||||
sc->domain_addr = devm_ioremap(&pdev->dev, res->start,
|
||||
resource_size(res));
|
||||
if (sc->domain_addr == NULL)
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
sc->reset_aon = of_property_read_bool(pdev->dev.of_node,
|
||||
"qcom,reset-aon-logic");
|
||||
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
|
||||
"sw_reset");
|
||||
if (res) {
|
||||
sc->sw_reset_addr = devm_ioremap(&pdev->dev, res->start,
|
||||
resource_size(res));
|
||||
if (sc->sw_reset_addr == NULL)
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
|
||||
"hw_ctrl_addr");
|
||||
if (res) {
|
||||
sc->hw_ctrl_addr = devm_ioremap(&pdev->dev, res->start,
|
||||
resource_size(res));
|
||||
if (sc->hw_ctrl_addr == NULL)
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
sc->gds_timeout = TIMEOUT_US;
|
||||
ret = of_property_read_u32(pdev->dev.of_node, "qcom,gds-timeout",
|
||||
&timeout);
|
||||
if (!ret)
|
||||
sc->gds_timeout = timeout;
|
||||
|
||||
sc->clock_count = of_property_count_strings(pdev->dev.of_node,
|
||||
"clock-names");
|
||||
if (sc->clock_count == -EINVAL) {
|
||||
sc->clock_count = 0;
|
||||
} else if (IS_ERR_VALUE((unsigned long)sc->clock_count)) {
|
||||
dev_err(&pdev->dev, "Failed to get clock names\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
sc->clocks = devm_kzalloc(&pdev->dev,
|
||||
sizeof(struct clk *) * sc->clock_count, GFP_KERNEL);
|
||||
if (!sc->clocks)
|
||||
return -ENOMEM;
|
||||
|
||||
sc->root_clk_idx = -1;
|
||||
|
||||
sc->root_en = of_property_read_bool(pdev->dev.of_node,
|
||||
"qcom,enable-root-clk");
|
||||
sc->force_root_en = of_property_read_bool(pdev->dev.of_node,
|
||||
"qcom,force-enable-root-clk");
|
||||
for (i = 0; i < sc->clock_count; i++) {
|
||||
const char *clock_name;
|
||||
|
||||
of_property_read_string_index(pdev->dev.of_node, "clock-names",
|
||||
i, &clock_name);
|
||||
sc->clocks[i] = devm_clk_get(&pdev->dev, clock_name);
|
||||
if (IS_ERR(sc->clocks[i])) {
|
||||
int rc = PTR_ERR(sc->clocks[i]);
|
||||
|
||||
if (rc != -EPROBE_DEFER)
|
||||
dev_err(&pdev->dev, "Failed to get %s\n",
|
||||
clock_name);
|
||||
return rc;
|
||||
}
|
||||
|
||||
if (!strcmp(clock_name, "core_root_clk"))
|
||||
sc->root_clk_idx = i;
|
||||
}
|
||||
|
||||
if ((sc->root_en || sc->force_root_en) && (sc->root_clk_idx == -1)) {
|
||||
dev_err(&pdev->dev, "Failed to get root clock name\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
	sc->rdesc.id = atomic_inc_return(&gdsc_count);
	sc->rdesc.ops = &gdsc_ops;
	sc->rdesc.type = REGULATOR_VOLTAGE;
	sc->rdesc.owner = THIS_MODULE;
	platform_set_drvdata(pdev, sc);

	/*
	 * Disable HW trigger: collapse/restore occur based on register writes.
	 * Disable SW override: Use hardware state-machine for sequencing.
	 */
	regval = readl_relaxed(sc->gdscr);
	regval &= ~(HW_CONTROL_MASK | SW_OVERRIDE_MASK);

	if (!of_property_read_u32(pdev->dev.of_node, "qcom,clk-dis-wait-val",
				  &clk_dis_wait_val))
		clk_dis_wait_val = clk_dis_wait_val << 12;

	/* Configure wait time between states. */
	regval &= ~(EN_REST_WAIT_MASK | EN_FEW_WAIT_MASK | CLK_DIS_WAIT_MASK);
	regval |= EN_REST_WAIT_VAL | EN_FEW_WAIT_VAL | clk_dis_wait_val;
	writel_relaxed(regval, sc->gdscr);

	sc->no_status_check_on_disable =
			of_property_read_bool(pdev->dev.of_node,
					"qcom,no-status-check-on-disable");
	retain_mem = of_property_read_bool(pdev->dev.of_node,
					"qcom,retain-mem");
	sc->toggle_mem = !retain_mem;
	retain_periph = of_property_read_bool(pdev->dev.of_node,
					"qcom,retain-periph");
	sc->toggle_periph = !retain_periph;
	sc->toggle_logic = !of_property_read_bool(pdev->dev.of_node,
					"qcom,skip-logic-collapse");
	support_hw_trigger = of_property_read_bool(pdev->dev.of_node,
					"qcom,support-hw-trigger");
	if (support_hw_trigger) {
		init_data->constraints.valid_ops_mask |= REGULATOR_CHANGE_MODE;
		init_data->constraints.valid_modes_mask |=
				REGULATOR_MODE_NORMAL | REGULATOR_MODE_FAST;
	}

	if (!sc->toggle_logic) {
		sc->reset_count = of_property_count_strings(pdev->dev.of_node,
						"reset-names");
		if (sc->reset_count == -EINVAL) {
			sc->reset_count = 0;
		} else if (IS_ERR_VALUE((unsigned long)sc->reset_count)) {
			dev_err(&pdev->dev, "Failed to get reset names\n");
			return -EINVAL;
		}

		sc->reset_clocks = devm_kzalloc(&pdev->dev,
						sizeof(struct reset_control *) *
						sc->reset_count,
						GFP_KERNEL);
		if (!sc->reset_clocks)
			return -ENOMEM;

		for (i = 0; i < sc->reset_count; i++) {
			const char *reset_name;

			of_property_read_string_index(pdev->dev.of_node,
						"reset-names", i, &reset_name);
			sc->reset_clocks[i] = devm_reset_control_get(&pdev->dev,
						reset_name);
			if (IS_ERR(sc->reset_clocks[i])) {
				int rc = PTR_ERR(sc->reset_clocks[i]);

				if (rc != -EPROBE_DEFER)
					dev_err(&pdev->dev, "Failed to get %s\n",
						reset_name);
				return rc;
			}
		}

		regval &= ~SW_COLLAPSE_MASK;
		writel_relaxed(regval, sc->gdscr);

		ret = poll_gdsc_status(sc, ENABLED);
		if (ret) {
			dev_err(&pdev->dev, "%s enable timed out: 0x%x\n",
				sc->rdesc.name, regval);
			return ret;
		}
	}

	sc->allow_clear = of_property_read_bool(pdev->dev.of_node,
						"qcom,disallow-clear");
	sc->allow_clear = !sc->allow_clear;

	for (i = 0; i < sc->clock_count; i++) {
		if (retain_mem || (regval & PWR_ON_MASK) || !sc->allow_clear)
			clk_set_flags(sc->clocks[i], CLKFLAG_RETAIN_MEM);
		else
			clk_set_flags(sc->clocks[i], CLKFLAG_NORETAIN_MEM);

		if (retain_periph || (regval & PWR_ON_MASK) || !sc->allow_clear)
			clk_set_flags(sc->clocks[i], CLKFLAG_RETAIN_PERIPH);
		else
			clk_set_flags(sc->clocks[i], CLKFLAG_NORETAIN_PERIPH);
	}

	reg_config.dev = &pdev->dev;
	reg_config.init_data = init_data;
	reg_config.driver_data = sc;
	reg_config.of_node = pdev->dev.of_node;
	sc->rdev = regulator_register(&sc->rdesc, &reg_config);
	if (IS_ERR(sc->rdev)) {
		dev_err(&pdev->dev, "regulator_register(\"%s\") failed.\n",
			sc->rdesc.name);
		return PTR_ERR(sc->rdev);
	}

	return 0;
}

static int gdsc_remove(struct platform_device *pdev)
{
	struct gdsc *sc = platform_get_drvdata(pdev);

	regulator_unregister(sc->rdev);
	return 0;
}

static const struct of_device_id gdsc_match_table[] = {
	{ .compatible = "qcom,gdsc" },
	{}
};

static struct platform_driver gdsc_driver = {
	.probe = gdsc_probe,
	.remove = gdsc_remove,
	.driver = {
		.name = "gdsc",
		.of_match_table = gdsc_match_table,
		.owner = THIS_MODULE,
	},
};

static int __init gdsc_init(void)
{
	return platform_driver_register(&gdsc_driver);
}
subsys_initcall(gdsc_init);

static void __exit gdsc_exit(void)
{
	platform_driver_unregister(&gdsc_driver);
}
module_exit(gdsc_exit);

MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("MSM8974 GDSC power rail regulator driver");
747	drivers/clk/msm/msm-clock-controller.c (new file)
@@ -0,0 +1,747 @@
/* Copyright (c) 2014, 2016-2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#define pr_fmt(fmt) "msmclock: %s: " fmt, __func__

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/clkdev.h>
#include <linux/io.h>
#include <linux/mutex.h>
#include <linux/regulator/consumer.h>
#include <linux/of.h>
#include <linux/hashtable.h>

#include <linux/clk/msm-clk-provider.h>
#include <soc/qcom/msm-clock-controller.h>
#include <soc/qcom/clock-rpm.h>

/* Protects list operations */
static DEFINE_MUTEX(msmclk_lock);
static LIST_HEAD(msmclk_parser_list);
static u32 msmclk_debug;

struct hitem {
	struct hlist_node list;
	phandle key;
	void *ptr;
};

int of_property_count_phandles(struct device_node *np, char *propname)
{
	const __be32 *phandle;
	int size;

	phandle = of_get_property(np, propname, &size);
	return phandle ? (size / sizeof(*phandle)) : -EINVAL;
}
EXPORT_SYMBOL(of_property_count_phandles);

int of_property_read_phandle_index(struct device_node *np, char *propname,
					int index, phandle *p)
{
	const __be32 *phandle;
	int size;

	phandle = of_get_property(np, propname, &size);
	if ((!phandle) || (size < sizeof(*phandle) * (index + 1)))
		return -EINVAL;

	*p = be32_to_cpup(phandle + index);
	return 0;
}
EXPORT_SYMBOL(of_property_read_phandle_index);

static int generic_vdd_parse_regulators(struct device *dev,
			struct clk_vdd_class *vdd, struct device_node *np)
{
	int num_regulators, i, rc;
	char *name = "qcom,regulators";

	num_regulators = of_property_count_phandles(np, name);
	if (num_regulators <= 0) {
		dt_prop_err(np, name, "missing dt property\n");
		return -EINVAL;
	}

	vdd->regulator = devm_kzalloc(dev,
			sizeof(*vdd->regulator) * num_regulators,
			GFP_KERNEL);
	if (!vdd->regulator) {
		dt_err(np, "memory alloc failure\n");
		return -ENOMEM;
	}

	for (i = 0; i < num_regulators; i++) {
		phandle p;

		rc = of_property_read_phandle_index(np, name, i, &p);
		if (rc) {
			dt_prop_err(np, name, "unable to read phandle\n");
			return rc;
		}

		vdd->regulator[i] = msmclk_parse_phandle(dev, p);
		if (IS_ERR(vdd->regulator[i])) {
			dt_prop_err(np, name, "hashtable lookup failed\n");
			return PTR_ERR(vdd->regulator[i]);
		}
	}

	vdd->num_regulators = num_regulators;
	return 0;
}

static int generic_vdd_parse_levels(struct device *dev,
			struct clk_vdd_class *vdd, struct device_node *np)
{
	int len, rc;
	char *name = "qcom,uV-levels";

	if (!of_find_property(np, name, &len)) {
		dt_prop_err(np, name, "missing dt property\n");
		return -EINVAL;
	}

	len /= sizeof(u32);
	if (len % vdd->num_regulators) {
		dt_err(np, "mismatch between qcom,uV-levels and qcom,regulators dt properties\n");
		return -EINVAL;
	}

	vdd->num_levels = len / vdd->num_regulators;
	vdd->vdd_uv = devm_kzalloc(dev, len * sizeof(*vdd->vdd_uv),
					GFP_KERNEL);
	vdd->level_votes = devm_kzalloc(dev,
				vdd->num_levels * sizeof(*vdd->level_votes),
				GFP_KERNEL);

	if (!vdd->vdd_uv || !vdd->level_votes) {
		dt_err(np, "memory alloc failure\n");
		return -ENOMEM;
	}

	rc = of_property_read_u32_array(np, name, vdd->vdd_uv,
					vdd->num_levels * vdd->num_regulators);
	if (rc) {
		dt_prop_err(np, name, "unable to read u32 array\n");
		return -EINVAL;
	}

	/* Optional Property */
	name = "qcom,uA-levels";
	if (!of_find_property(np, name, &len))
		return 0;

	len /= sizeof(u32);
	if (len / vdd->num_regulators != vdd->num_levels) {
		dt_err(np, "size of qcom,uA-levels and qcom,uV-levels must match\n");
		return -EINVAL;
	}

	vdd->vdd_ua = devm_kzalloc(dev, len * sizeof(*vdd->vdd_ua),
					GFP_KERNEL);
	if (!vdd->vdd_ua)
		return -ENOMEM;

	rc = of_property_read_u32_array(np, name, vdd->vdd_ua,
					vdd->num_levels * vdd->num_regulators);
	if (rc) {
		dt_prop_err(np, name, "unable to read u32 array\n");
		return -EINVAL;
	}

	return 0;
}

static void *simple_vdd_class_dt_parser(struct device *dev,
			struct device_node *np)
{
	struct clk_vdd_class *vdd;
	int rc = 0;

	vdd = devm_kzalloc(dev, sizeof(*vdd), GFP_KERNEL);
	if (!vdd)
		return ERR_PTR(-ENOMEM);

	mutex_init(&vdd->lock);
	vdd->class_name = np->name;

	/* Parse regulators first; parse_levels divides by num_regulators. */
	rc = generic_vdd_parse_regulators(dev, vdd, np);
	if (!rc)
		rc = generic_vdd_parse_levels(dev, vdd, np);
	if (rc) {
		dt_err(np, "unable to read vdd_class\n");
		return ERR_PTR(rc);
	}

	return vdd;
}
MSMCLK_PARSER(simple_vdd_class_dt_parser, "qcom,simple-vdd-class", 0);

static int generic_clk_parse_parents(struct device *dev, struct clk *c,
					struct device_node *np)
{
	int rc;
	phandle p;
	char *name = "qcom,parent";

	/* This property is optional */
	if (!of_find_property(np, name, NULL))
		return 0;

	rc = of_property_read_phandle_index(np, name, 0, &p);
	if (rc) {
		dt_prop_err(np, name, "unable to read phandle\n");
		return rc;
	}

	c->parent = msmclk_parse_phandle(dev, p);
	if (IS_ERR(c->parent)) {
		dt_prop_err(np, name, "hashtable lookup failed\n");
		return PTR_ERR(c->parent);
	}

	return 0;
}

static int generic_clk_parse_vdd(struct device *dev, struct clk *c,
					struct device_node *np)
{
	phandle p;
	int rc;
	char *name = "qcom,supply-group";

	/* This property is optional */
	if (!of_find_property(np, name, NULL))
		return 0;

	rc = of_property_read_phandle_index(np, name, 0, &p);
	if (rc) {
		dt_prop_err(np, name, "unable to read phandle\n");
		return rc;
	}

	c->vdd_class = msmclk_parse_phandle(dev, p);
	if (IS_ERR(c->vdd_class)) {
		dt_prop_err(np, name, "hashtable lookup failed\n");
		return PTR_ERR(c->vdd_class);
	}

	return 0;
}

static int generic_clk_parse_flags(struct device *dev, struct clk *c,
					struct device_node *np)
{
	int rc;
	char *name = "qcom,clk-flags";

	/* This property is optional */
	if (!of_find_property(np, name, NULL))
		return 0;

	rc = of_property_read_u32(np, name, &c->flags);
	if (rc) {
		dt_prop_err(np, name, "unable to read u32\n");
		return rc;
	}

	return 0;
}

static int generic_clk_parse_fmax(struct device *dev, struct clk *c,
					struct device_node *np)
{
	u32 prop_len, i;
	int rc;
	char *name = "qcom,clk-fmax";

	/* This property is optional */
	if (!of_find_property(np, name, &prop_len))
		return 0;

	if (!c->vdd_class) {
		dt_err(np, "both qcom,clk-fmax and qcom,supply-group must be defined\n");
		return -EINVAL;
	}

	prop_len /= sizeof(u32);
	if (prop_len % 2) {
		dt_prop_err(np, name, "bad length\n");
		return -EINVAL;
	}

	/* Value at proplen - 2 is the index of the last entry in fmax array */
	rc = of_property_read_u32_index(np, name, prop_len - 2, &c->num_fmax);
	if (rc) {
		dt_prop_err(np, name, "unable to read u32\n");
		return rc;
	}
	c->num_fmax += 1;

	c->fmax = devm_kzalloc(dev, sizeof(*c->fmax) * c->num_fmax, GFP_KERNEL);
	if (!c->fmax)
		return -ENOMEM;

	for (i = 0; i < prop_len; i += 2) {
		u32 level, value;

		rc = of_property_read_u32_index(np, name, i, &level);
		if (rc) {
			dt_prop_err(np, name, "unable to read u32\n");
			return rc;
		}

		rc = of_property_read_u32_index(np, name, i + 1, &value);
		if (rc) {
			dt_prop_err(np, name, "unable to read u32\n");
			return rc;
		}

		if (level >= c->num_fmax) {
			dt_prop_err(np, name, "must be sorted\n");
			return -EINVAL;
		}
		c->fmax[level] = value;
	}

	return 0;
}

static int generic_clk_add_lookup_tbl_entry(struct device *dev, struct clk *c)
{
	struct msmclk_data *drv = dev_get_drvdata(dev);
	struct clk_lookup *cl;

	if (drv->clk_tbl_size >= drv->max_clk_tbl_size) {
		dev_err(dev, "child node count should be > clock_count?\n");
		return -EINVAL;
	}

	cl = drv->clk_tbl + drv->clk_tbl_size;
	cl->clk = c;
	drv->clk_tbl_size++;
	return 0;
}

static int generic_clk_parse_depends(struct device *dev, struct clk *c,
					struct device_node *np)
{
	phandle p;
	int rc;
	char *name = "qcom,depends";

	/* This property is optional */
	if (!of_find_property(np, name, NULL))
		return 0;

	rc = of_property_read_phandle_index(np, name, 0, &p);
	if (rc) {
		dt_prop_err(np, name, "unable to read phandle\n");
		return rc;
	}

	c->depends = msmclk_parse_phandle(dev, p);
	if (IS_ERR(c->depends)) {
		dt_prop_err(np, name, "hashtable lookup failed\n");
		return PTR_ERR(c->depends);
	}

	return 0;
}

static int generic_clk_parse_init_config(struct device *dev, struct clk *c,
					struct device_node *np)
{
	int rc;
	u32 temp;
	char *name = "qcom,always-on";

	c->always_on = of_property_read_bool(np, name);

	name = "qcom,config-rate";
	/* This property is optional */
	if (!of_find_property(np, name, NULL))
		return 0;

	rc = of_property_read_u32(np, name, &temp);
	if (rc) {
		dt_prop_err(np, name, "unable to read u32\n");
		return rc;
	}
	c->init_rate = temp;

	return rc;
}

void *msmclk_generic_clk_init(struct device *dev, struct device_node *np,
				struct clk *c)
{
	int rc;

	/* CLK_INIT macro */
	spin_lock_init(&c->lock);
	mutex_init(&c->prepare_lock);
	INIT_LIST_HEAD(&c->children);
	INIT_LIST_HEAD(&c->siblings);
	INIT_LIST_HEAD(&c->list);
	c->dbg_name = np->name;

	rc = generic_clk_add_lookup_tbl_entry(dev, c);
	rc |= generic_clk_parse_flags(dev, c, np);
	rc |= generic_clk_parse_parents(dev, c, np);
	rc |= generic_clk_parse_vdd(dev, c, np);
	rc |= generic_clk_parse_fmax(dev, c, np);
	rc |= generic_clk_parse_depends(dev, c, np);
	rc |= generic_clk_parse_init_config(dev, c, np);

	if (rc) {
		dt_err(np, "unable to read clk\n");
		return ERR_PTR(-EINVAL);
	}

	return c;
}

static struct msmclk_parser *msmclk_parser_lookup(struct device_node *np)
{
	struct msmclk_parser *item;

	list_for_each_entry(item, &msmclk_parser_list, list) {
		if (of_device_is_compatible(np, item->compatible))
			return item;
	}
	return NULL;
}

void msmclk_parser_register(struct msmclk_parser *item)
{
	mutex_lock(&msmclk_lock);
	list_add(&item->list, &msmclk_parser_list);
	mutex_unlock(&msmclk_lock);
}

static int msmclk_htable_add(struct device *dev, void *result, phandle key);

void *msmclk_parse_dt_node(struct device *dev, struct device_node *np)
{
	struct msmclk_parser *parser;
	phandle key;
	void *result;
	int rc;

	key = np->phandle;
	result = msmclk_lookup_phandle(dev, key);
	if (!IS_ERR(result))
		return result;

	if (!of_device_is_available(np)) {
		dt_err(np, "node is disabled\n");
		return ERR_PTR(-EINVAL);
	}

	parser = msmclk_parser_lookup(np);
	if (IS_ERR_OR_NULL(parser)) {
		dt_err(np, "no parser found\n");
		return ERR_PTR(-EINVAL);
	}

	/* This may return -EPROBE_DEFER */
	result = parser->parsedt(dev, np);
	if (IS_ERR(result)) {
		dt_err(np, "parsedt failed\n");
		return result;
	}

	rc = msmclk_htable_add(dev, result, key);
	if (rc)
		return ERR_PTR(rc);

	return result;
}

void *msmclk_parse_phandle(struct device *dev, phandle key)
{
	struct hitem *item;
	struct device_node *np;
	struct msmclk_data *drv = dev_get_drvdata(dev);

	/*
	 * the default phandle value is 0. Since hashtable keys must
	 * be unique, reject the default value.
	 */
	if (!key)
		return ERR_PTR(-EINVAL);

	hash_for_each_possible(drv->htable, item, list, key) {
		if (item->key == key)
			return item->ptr;
	}

	np = of_find_node_by_phandle(key);
	if (!np)
		return ERR_PTR(-EINVAL);

	return msmclk_parse_dt_node(dev, np);
}
EXPORT_SYMBOL(msmclk_parse_phandle);

void *msmclk_lookup_phandle(struct device *dev, phandle key)
{
	struct hitem *item;
	struct msmclk_data *drv = dev_get_drvdata(dev);

	hash_for_each_possible(drv->htable, item, list, key) {
		if (item->key == key)
			return item->ptr;
	}

	return ERR_PTR(-EINVAL);
}
EXPORT_SYMBOL(msmclk_lookup_phandle);

static int msmclk_htable_add(struct device *dev, void *data, phandle key)
{
	struct hitem *item;
	struct msmclk_data *drv = dev_get_drvdata(dev);

	/*
	 * If there are no phandle references to a node, key == 0. However, if
	 * there is a second node like this, both will have key == 0. This
	 * violates the requirement that hashtable keys be unique. Skip it.
	 */
	if (!key)
		return 0;

	if (!IS_ERR(msmclk_lookup_phandle(dev, key))) {
		struct device_node *np = of_find_node_by_phandle(key);

		dev_err(dev, "attempt to add duplicate entry for %s\n",
			np ? np->name : "NULL");
		return -EINVAL;
	}

	item = devm_kzalloc(dev, sizeof(*item), GFP_KERNEL);
	if (!item)
		return -ENOMEM;

	INIT_HLIST_NODE(&item->list);
	item->key = key;
	item->ptr = data;

	hash_add(drv->htable, &item->list, key);
	return 0;
}

/*
 * Currently, regulators are the only elements capable of probe deferral.
 * Check them first to handle probe deferral efficiently.
 */
static int get_ext_regulators(struct device *dev)
{
	int num_strings, i, rc;
	struct device_node *np;
	void *item;
	char *name = "qcom,regulator-names";

	np = dev->of_node;
	/* This property is optional */
	num_strings = of_property_count_strings(np, name);
	if (num_strings <= 0)
		return 0;

	for (i = 0; i < num_strings; i++) {
		const char *str;
		char buf[50];
		phandle key;

		rc = of_property_read_string_index(np, name, i, &str);
		if (rc) {
			dt_prop_err(np, name, "unable to read string\n");
			return rc;
		}

		item = devm_regulator_get(dev, str);
		if (IS_ERR(item)) {
			dev_err(dev, "Failed to get regulator: %s\n", str);
			return PTR_ERR(item);
		}

		snprintf(buf, ARRAY_SIZE(buf), "%s-supply", str);
		rc = of_property_read_phandle_index(np, buf, 0, &key);
		if (rc) {
			dt_prop_err(np, buf, "unable to read phandle\n");
			return rc;
		}

		rc = msmclk_htable_add(dev, item, key);
		if (rc)
			return rc;
	}
	return 0;
}

static struct clk *msmclk_clk_get(struct of_phandle_args *clkspec, void *data)
{
	phandle key;
	struct clk *c = ERR_PTR(-ENOENT);

	key = clkspec->args[0];
	c = msmclk_lookup_phandle(data, key);

	if (!IS_ERR(c) && !(c->flags & CLKFLAG_INIT_DONE))
		return ERR_PTR(-EPROBE_DEFER);

	return c;
}

static void *regulator_dt_parser(struct device *dev, struct device_node *np)
{
	dt_err(np, "regulators should be handled in probe()\n");
	return ERR_PTR(-EINVAL);
}
MSMCLK_PARSER(regulator_dt_parser, "qcom,rpm-smd-regulator", 0);

static void *msmclk_dt_parser(struct device *dev, struct device_node *np)
{
	dt_err(np, "calling into other clock controllers isn't allowed\n");
	return ERR_PTR(-EINVAL);
}
MSMCLK_PARSER(msmclk_dt_parser, "qcom,msm-clock-controller", 0);

static struct msmclk_data *msmclk_drv_init(struct device *dev)
{
	struct msmclk_data *drv;
	size_t size;

	drv = devm_kzalloc(dev, sizeof(*drv), GFP_KERNEL);
	if (!drv)
		return ERR_PTR(-ENOMEM);

	dev_set_drvdata(dev, drv);

	drv->dev = dev;
	INIT_LIST_HEAD(&drv->list);

	/* This overestimates size */
	drv->max_clk_tbl_size = of_get_child_count(dev->of_node);
	size = sizeof(*drv->clk_tbl) * drv->max_clk_tbl_size;
	drv->clk_tbl = devm_kzalloc(dev, size, GFP_KERNEL);
	if (!drv->clk_tbl)
		return ERR_PTR(-ENOMEM);

	hash_init(drv->htable);
	return drv;
}

static int msmclk_probe(struct platform_device *pdev)
{
	struct resource *res;
	struct device *dev;
	struct msmclk_data *drv;
	struct device_node *child;
	void *result;
	int rc = 0;

	dev = &pdev->dev;
	drv = msmclk_drv_init(dev);
	if (IS_ERR(drv))
		return PTR_ERR(drv);

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cc-base");
	if (!res) {
		dt_err(dev->of_node, "missing cc-base\n");
		return -EINVAL;
	}
	drv->base = devm_ioremap(dev, res->start, resource_size(res));
	if (!drv->base) {
		dev_err(dev, "ioremap failed for drv->base\n");
		return -ENOMEM;
	}
	rc = msmclk_htable_add(dev, drv, dev->of_node->phandle);
	if (rc)
		return rc;

	rc = enable_rpm_scaling();
	if (rc)
		return rc;

	rc = get_ext_regulators(dev);
	if (rc)
		return rc;

	/*
	 * Returning -EPROBE_DEFER here is inefficient due to
	 * destroying work 'unnecessarily'
	 */
	for_each_available_child_of_node(dev->of_node, child) {
		result = msmclk_parse_dt_node(dev, child);
		if (!IS_ERR(result))
			continue;
		if (!msmclk_debug)
			return PTR_ERR(result);
		/*
		 * Parse and report all errors instead of immediately
		 * exiting. Return the first error code.
		 */
		if (!rc)
			rc = PTR_ERR(result);
	}
	if (rc)
		return rc;

	rc = of_clk_add_provider(dev->of_node, msmclk_clk_get, dev);
	if (rc) {
		dev_err(dev, "of_clk_add_provider failed\n");
		return rc;
	}

	/*
	 * can't fail after registering clocks, because users may have
	 * gotten clock references. Failing would delete the memory.
	 */
	WARN_ON(msm_clock_register(drv->clk_tbl, drv->clk_tbl_size));
	dev_info(dev, "registered clocks\n");

	return 0;
}

static const struct of_device_id msmclk_match_table[] = {
	{.compatible = "qcom,msm-clock-controller"},
	{}
};

static struct platform_driver msmclk_driver = {
	.probe = msmclk_probe,
	.driver = {
		.name = "msm-clock-controller",
		.of_match_table = msmclk_match_table,
		.owner = THIS_MODULE,
	},
};

static bool initialized;
int __init msmclk_init(void)
{
	int rc;

	if (initialized)
		return 0;

	rc = platform_driver_register(&msmclk_driver);
	if (rc)
		return rc;
	initialized = true;
	return rc;
}
arch_initcall(msmclk_init);
99	drivers/clk/msm/reset.c (new file)
@@ -0,0 +1,99 @@
/* Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#include <linux/device.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/reset-controller.h>

#include "reset.h"

static int msm_reset(struct reset_controller_dev *rcdev, unsigned long id)
{
	rcdev->ops->assert(rcdev, id);
	udelay(1);
	rcdev->ops->deassert(rcdev, id);
	return 0;
}

static int
msm_reset_assert(struct reset_controller_dev *rcdev, unsigned long id)
{
	struct msm_reset_controller *rst;
	const struct msm_reset_map *map;
	u32 regval;

	rst = to_msm_reset_controller(rcdev);
	map = &rst->reset_map[id];

	regval = readl_relaxed(rst->base + map->reg);
	regval |= BIT(map->bit);
	writel_relaxed(regval, rst->base + map->reg);

	/* Make sure the reset is asserted */
	mb();

	return 0;
}

static int
msm_reset_deassert(struct reset_controller_dev *rcdev, unsigned long id)
{
	struct msm_reset_controller *rst;
	const struct msm_reset_map *map;
	u32 regval;

	rst = to_msm_reset_controller(rcdev);
	map = &rst->reset_map[id];

	regval = readl_relaxed(rst->base + map->reg);
	regval &= ~BIT(map->bit);
	writel_relaxed(regval, rst->base + map->reg);

	/* Make sure the reset is de-asserted */
	mb();

	return 0;
}

struct reset_control_ops msm_reset_ops = {
	.reset = msm_reset,
	.assert = msm_reset_assert,
	.deassert = msm_reset_deassert,
};
EXPORT_SYMBOL(msm_reset_ops);

int msm_reset_controller_register(struct platform_device *pdev,
	const struct msm_reset_map *map, unsigned int num_resets,
	void __iomem *virt_base)
{
	struct msm_reset_controller *reset;
	int ret = 0;

	reset = devm_kzalloc(&pdev->dev, sizeof(*reset), GFP_KERNEL);
	if (!reset)
		return -ENOMEM;

	reset->rcdev.of_node = pdev->dev.of_node;
	reset->rcdev.ops = &msm_reset_ops;
	reset->rcdev.owner = pdev->dev.driver->owner;
	reset->rcdev.nr_resets = num_resets;
	reset->reset_map = map;
	reset->base = virt_base;

	ret = reset_controller_register(&reset->rcdev);
	if (ret)
		dev_err(&pdev->dev, "Failed to register with reset controller\n");

	return ret;
}
EXPORT_SYMBOL(msm_reset_controller_register);
38	drivers/clk/msm/reset.h (new file)
@@ -0,0 +1,38 @@
/* Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __DRIVERS_CLK_RESET_H
#define __DRIVERS_CLK_RESET_H

#include <linux/platform_device.h>
#include <linux/reset-controller.h>

struct msm_reset_map {
	unsigned int reg;
	u8 bit;
};

struct msm_reset_controller {
	const struct msm_reset_map *reset_map;
	struct reset_controller_dev rcdev;
	void __iomem *base;
};

#define to_msm_reset_controller(r) \
	container_of(r, struct msm_reset_controller, rcdev)

extern struct reset_control_ops msm_reset_ops;

int msm_reset_controller_register(struct platform_device *pdev,
	const struct msm_reset_map *map, unsigned int nr_resets,
	void __iomem *virt_base);
#endif
@@ -47,7 +47,11 @@
 #include <asm/cpuidle.h>
 #include "lpm-levels.h"
 #include <trace/events/power.h>
+#if defined(CONFIG_COMMON_CLK)
 #include "../clk/clk.h"
+#elif defined(CONFIG_COMMON_CLK_MSM)
+#include "../../drivers/clk/msm/clock.h"
+#endif /* CONFIG_COMMON_CLK */
 #define CREATE_TRACE_POINTS
 #include <trace/events/trace_msm_low_power.h>

160	include/dt-bindings/clock/mdm-clocks-9607.h (new file)
@@ -0,0 +1,160 @@
/* Copyright (c) 2015, 2019, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __MDM_CLOCKS_9607_H
#define __MDM_CLOCKS_9607_H

/* PLL Sources */
#define clk_gpll0_clk_src		0x5933b69f
#define clk_gpll0_ao_clk_src		0x6b2fb034
#define clk_gpll2_clk_src		0x7c34503b
#define clk_gpll1_clk_src		0x916f8847

#define clk_a7sspll			0x0b2e5cbd

/* RPM and Voter clocks */
#define clk_pcnoc_clk			0xc1296d0f
#define clk_pcnoc_a_clk			0x9bcffee4
#define clk_pcnoc_msmbus_clk		0x2b53b688
#define clk_pcnoc_msmbus_a_clk		0x9753a54f
#define clk_pcnoc_keepalive_a_clk	0x9464f720
#define clk_pcnoc_usb_clk		0x57adc448
#define clk_pcnoc_usb_a_clk		0x11d6a74e
#define clk_bimc_clk			0x4b80bf00
#define clk_bimc_a_clk			0x4b25668a
#define clk_bimc_msmbus_clk		0xd212feea
#define clk_bimc_msmbus_a_clk		0x71d1a499
#define clk_bimc_usb_clk		0x9bd2b2bf
#define clk_bimc_usb_a_clk		0xea410834
#define clk_qdss_clk			0x1492202a
#define clk_qdss_a_clk			0xdd121669
#define clk_qpic_clk			0x3ce6f7bb
#define clk_qpic_a_clk			0xd70ccb7c
#define clk_xo_clk_src			0x23f5649f
#define clk_xo_a_clk_src		0x2fdd2c7c
#define clk_xo_otg_clk			0x79bca5cc
#define clk_xo_lpm_clk			0x2be48257
#define clk_xo_pil_mss_clk		0xe97a8354
#define clk_bb_clk1			0xf5304268
#define clk_bb_clk1_pin			0x6dd0a779

/* SRCs */
#define clk_apss_ahb_clk_src		0x36f8495f
#define clk_emac_0_125m_clk_src		0x955db353
#define clk_blsp1_qup1_i2c_apps_clk_src	0x17f78f5e
#define clk_blsp1_qup1_spi_apps_clk_src	0xf534c4fa
#define clk_blsp1_qup2_i2c_apps_clk_src	0x8de71c79
#define clk_blsp1_qup2_spi_apps_clk_src	0x33cf809a
#define clk_blsp1_qup3_i2c_apps_clk_src	0xf161b902
#define clk_blsp1_qup3_spi_apps_clk_src	0x5e95683f
#define clk_blsp1_qup4_i2c_apps_clk_src	0xb2ecce68
#define clk_blsp1_qup4_spi_apps_clk_src 0xddb5bbdb
|
||||
#define clk_blsp1_qup5_i2c_apps_clk_src 0x71ea7804
|
||||
#define clk_blsp1_qup5_spi_apps_clk_src 0x9752f35f
|
||||
#define clk_blsp1_qup6_i2c_apps_clk_src 0x28806803
|
||||
#define clk_blsp1_qup6_spi_apps_clk_src 0x44a1edc4
|
||||
#define clk_blsp1_uart1_apps_clk_src 0xf8146114
|
||||
#define clk_blsp1_uart2_apps_clk_src 0xfc9c2f73
|
||||
#define clk_blsp1_uart3_apps_clk_src 0x600497f2
|
||||
#define clk_blsp1_uart4_apps_clk_src 0x56bff15c
|
||||
#define clk_blsp1_uart5_apps_clk_src 0x218ef697
|
||||
#define clk_blsp1_uart6_apps_clk_src 0x8fbdbe4c
|
||||
#define clk_crypto_clk_src 0x37a21414
|
||||
#define clk_gp1_clk_src 0xad85b97a
|
||||
#define clk_gp2_clk_src 0xfb1f0065
|
||||
#define clk_gp3_clk_src 0x63b693d6
|
||||
#define clk_pdm2_clk_src 0x31e494fd
|
||||
#define clk_sdcc1_apps_clk_src 0xd4975db2
|
||||
#define clk_sdcc2_apps_clk_src 0xfc46c821
|
||||
#define clk_emac_0_sys_25m_clk_src 0x92fe3614
|
||||
#define clk_emac_0_tx_clk_src 0x0487ec76
|
||||
#define clk_usb_hs_system_clk_src 0x28385546
|
||||
#define clk_usb_hsic_clk_src 0x141b01df
|
||||
#define clk_usb_hsic_io_cal_clk_src 0xc83584bd
|
||||
#define clk_usb_hsic_system_clk_src 0x52ef7224
|
||||
|
||||
/*Branch*/
|
||||
#define clk_gcc_apss_ahb_clk 0x2b0d39ff
|
||||
#define clk_gcc_apss_axi_clk 0x1d47f4ff
|
||||
#define clk_gcc_prng_ahb_clk 0x397e7eaa
|
||||
#define clk_gcc_qdss_dap_clk 0x7fa9aa73
|
||||
#define clk_gcc_apss_tcu_clk 0xaf56a329
|
||||
#define clk_gcc_blsp1_ahb_clk 0x8caa5b4f
|
||||
#define clk_gcc_blsp1_qup1_i2c_apps_clk 0xc303fae9
|
||||
#define clk_gcc_blsp1_qup1_spi_apps_clk 0x759a76b0
|
||||
#define clk_gcc_blsp1_qup2_i2c_apps_clk 0x1076f220
|
||||
#define clk_gcc_blsp1_qup2_spi_apps_clk 0x3e77d48f
|
||||
#define clk_gcc_blsp1_qup3_i2c_apps_clk 0x9e25ac82
|
||||
#define clk_gcc_blsp1_qup3_spi_apps_clk 0xfb978880
|
||||
#define clk_gcc_blsp1_qup4_i2c_apps_clk 0xd7f40f6f
|
||||
#define clk_gcc_blsp1_qup4_spi_apps_clk 0x80f8722f
|
||||
#define clk_gcc_blsp1_qup5_i2c_apps_clk 0xacae5604
|
||||
#define clk_gcc_blsp1_qup5_spi_apps_clk 0xbf3e15d7
|
||||
#define clk_gcc_blsp1_qup6_i2c_apps_clk 0x5c6ad820
|
||||
#define clk_gcc_blsp1_qup6_spi_apps_clk 0x780d9f85
|
||||
#define clk_gcc_blsp1_uart1_apps_clk 0xc7c62f90
|
||||
#define clk_gcc_blsp1_uart2_apps_clk 0xf8a61c96
|
||||
#define clk_gcc_blsp1_uart3_apps_clk 0xc3298bd7
|
||||
#define clk_gcc_blsp1_uart4_apps_clk 0x26be16c0
|
||||
#define clk_gcc_blsp1_uart5_apps_clk 0x28a6bc74
|
||||
#define clk_gcc_blsp1_uart6_apps_clk 0x28fd3466
|
||||
#define clk_gcc_boot_rom_ahb_clk 0xde2adeb1
|
||||
#define clk_gcc_crypto_ahb_clk 0x94de4919
|
||||
#define clk_gcc_crypto_axi_clk 0xd4415c9b
|
||||
#define clk_gcc_crypto_clk 0x00d390d2
|
||||
#define clk_gcc_gp1_clk 0x057f7b69
|
||||
#define clk_gcc_gp2_clk 0x9bf83ffd
|
||||
#define clk_gcc_gp3_clk 0xec6539ee
|
||||
#define clk_gcc_mss_cfg_ahb_clk 0x111cde81
|
||||
#define clk_gcc_mss_q6_bimc_axi_clk 0x67544d62
|
||||
#define clk_gcc_pdm2_clk 0x99d55711
|
||||
#define clk_gcc_pdm_ahb_clk 0x365664f6
|
||||
#define clk_gcc_sdcc1_ahb_clk 0x691e0caa
|
||||
#define clk_gcc_sdcc1_apps_clk 0x9ad6fb96
|
||||
#define clk_gcc_sdcc2_ahb_clk 0x23d5727f
|
||||
#define clk_gcc_sdcc2_apps_clk 0x861b20ac
|
||||
#define clk_gcc_emac_0_125m_clk 0xe556de53
|
||||
#define clk_gcc_emac_0_ahb_clk 0x6a741d38
|
||||
#define clk_gcc_emac_0_axi_clk 0xf2b04fb4
|
||||
#define clk_gcc_emac_0_rx_clk 0x869a4e5c
|
||||
#define clk_gcc_emac_0_sys_25m_clk 0x5812832b
|
||||
#define clk_gcc_emac_0_sys_clk 0x34fb62b0
|
||||
#define clk_gcc_emac_0_tx_clk 0x331d3573
|
||||
#define clk_gcc_smmu_cfg_clk 0x75eaefa5
|
||||
#define clk_gcc_usb2a_phy_sleep_clk 0x6caa736f
|
||||
#define clk_gcc_usb_hs_phy_cfg_ahb_clk 0xe13808fd
|
||||
#define clk_gcc_usb_hs_ahb_clk 0x72ce8032
|
||||
#define clk_gcc_usb_hs_system_clk 0xa11972e5
|
||||
#define clk_gcc_usb_hsic_ahb_clk 0x3ec2631a
|
||||
#define clk_gcc_usb_hsic_clk 0x8de18b0e
|
||||
#define clk_gcc_usb_hsic_io_cal_clk 0xbc21f776
|
||||
#define clk_gcc_usb_hsic_io_cal_sleep_clk 0x20e09a22
|
||||
#define clk_gcc_usb_hsic_system_clk 0x145e9366
|
||||
#define clk_gcc_usb2_hs_phy_only_clk 0x0047179d
|
||||
#define clk_gcc_qusb2_phy_clk 0x996884d5
|
||||
/* DEBUG */
|
||||
#define clk_gcc_debug_mux 0x8121ac15
|
||||
#define clk_apss_debug_pri_mux 0xc691ff55
|
||||
#define clk_apc0_m_clk 0xce1e9473
|
||||
#define clk_apc1_m_clk 0x990fbaf7
|
||||
#define clk_apc2_m_clk 0x252cd4ae
|
||||
#define clk_apc3_m_clk 0x78c64486
|
||||
#define clk_l2_m_clk 0x4bedf4d0
|
||||
|
||||
#define clk_wcnss_m_clk 0x709f430b
|
||||
|
||||
#define GCC_USB2_HS_PHY_ONLY_BCR 0
|
||||
#define GCC_QUSB2_PHY_BCR 1
|
||||
#define GCC_USB_HS_BCR 2
|
||||
#define GCC_USB_HS_HSIC_BCR 3
|
||||
|
||||
#endif
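
A consumer device node refers to these IDs through the GCC clock provider after including the header above. The sketch below is illustrative only: the node and phandle names (`blsp1_uart2`, `clock_gcc`) and the clock-name strings are assumptions, not taken from this patch.

```dts
/* Hypothetical consumer fragment; the .dtsi would have
 * #include <dt-bindings/clock/mdm-clocks-9607.h> in scope. */
&blsp1_uart2 {
	clocks = <&clock_gcc clk_gcc_blsp1_uart2_apps_clk>,
		 <&clock_gcc clk_gcc_blsp1_ahb_clk>;
	clock-names = "core", "iface";
};
```

The driver then looks the clocks up by the `clock-names` strings via `clk_get()`, while the provider matches the 32-bit IDs against its `CLK_LOOKUP_OF` table.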
227
include/dt-bindings/clock/mdm-clocks-hwio-9607.h
Normal file
@@ -0,0 +1,227 @@
/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __MDM_CLOCKS_9607_HWIO_H
#define __MDM_CLOCKS_9607_HWIO_H

#define GPLL0_MODE 0x21000
#define GPLL0_STATUS 0x21024
#define GPLL1_MODE 0x20000
#define GPLL1_STATUS 0x2001C
#define GPLL2_MODE 0x25000
#define GPLL2_STATUS 0x25024
#define APCS_GPLL_ENA_VOTE 0x45000
#define APCS_MODE 0x00018
#define APSS_AHB_CMD_RCGR 0x46000
#define PRNG_AHB_CBCR 0x13004
#define EMAC_0_125M_CMD_RCGR 0x4E028
#define BLSP1_QUP1_I2C_APPS_CMD_RCGR 0x200C
#define BLSP1_QUP1_SPI_APPS_CMD_RCGR 0x2024
#define BLSP1_QUP2_I2C_APPS_CMD_RCGR 0x3000
#define BLSP1_QUP2_SPI_APPS_CMD_RCGR 0x3014
#define BLSP1_QUP3_I2C_APPS_CMD_RCGR 0x4000
#define BLSP1_QUP3_SPI_APPS_CMD_RCGR 0x4024
#define BLSP1_QUP4_I2C_APPS_CMD_RCGR 0x5000
#define BLSP1_QUP4_SPI_APPS_CMD_RCGR 0x5024
#define BLSP1_QUP5_I2C_APPS_CMD_RCGR 0x6000
#define BLSP1_QUP5_SPI_APPS_CMD_RCGR 0x6024
#define BLSP1_QUP6_I2C_APPS_CMD_RCGR 0x7000
#define BLSP1_QUP6_SPI_APPS_CMD_RCGR 0x7024
#define BLSP1_UART1_APPS_CMD_RCGR 0x2044
#define BLSP1_UART2_APPS_CMD_RCGR 0x3034
#define BLSP1_UART3_APPS_CMD_RCGR 0x4044
#define BLSP1_UART4_APPS_CMD_RCGR 0x5044
#define BLSP1_UART5_APPS_CMD_RCGR 0x6044
#define BLSP1_UART6_APPS_CMD_RCGR 0x7044
#define CRYPTO_CMD_RCGR 0x16004
#define GP1_CMD_RCGR 0x8004
#define GP2_CMD_RCGR 0x9004
#define GP3_CMD_RCGR 0xA004
#define PDM2_CMD_RCGR 0x44010
#define QPIC_CMD_RCGR 0x3F004
#define SDCC1_APPS_CMD_RCGR 0x42004
#define SDCC2_APPS_CMD_RCGR 0x43004
#define EMAC_0_SYS_25M_CMD_RCGR 0x4E03C
#define EMAC_0_TX_CMD_RCGR 0x4E014
#define USB_HS_SYSTEM_CMD_RCGR 0x41010
#define USB_HSIC_CMD_RCGR 0x3D018
#define USB_HSIC_IO_CAL_CMD_RCGR 0x3D030
#define USB_HSIC_SYSTEM_CMD_RCGR 0x3D000
#define BIMC_PCNOC_AXI_CBCR 0x31024
#define BLSP1_AHB_CBCR 0x1008
#define APCS_CLOCK_BRANCH_ENA_VOTE 0x45004
#define BLSP1_QUP1_I2C_APPS_CBCR 0x2008
#define BLSP1_QUP1_SPI_APPS_CBCR 0x2004
#define BLSP1_QUP2_I2C_APPS_CBCR 0x3010
#define BLSP1_QUP2_SPI_APPS_CBCR 0x300C
#define BLSP1_QUP3_I2C_APPS_CBCR 0x4020
#define BLSP1_QUP3_SPI_APPS_CBCR 0x401C
#define BLSP1_QUP4_I2C_APPS_CBCR 0x5020
#define BLSP1_QUP4_SPI_APPS_CBCR 0x501C
#define BLSP1_QUP5_I2C_APPS_CBCR 0x6020
#define BLSP1_QUP5_SPI_APPS_CBCR 0x601C
#define BLSP1_QUP6_I2C_APPS_CBCR 0x7020
#define BLSP1_QUP6_SPI_APPS_CBCR 0x701C
#define BLSP1_UART1_APPS_CBCR 0x203C
#define BLSP1_UART2_APPS_CBCR 0x302C
#define BLSP1_UART3_APPS_CBCR 0x403C
#define BLSP1_UART4_APPS_CBCR 0x503C
#define BLSP1_UART5_APPS_CBCR 0x603C
#define BLSP1_UART6_APPS_CBCR 0x703C
#define APSS_AHB_CBCR 0x4601C
#define APSS_AXI_CBCR 0x46020
#define BOOT_ROM_AHB_CBCR 0x1300C
#define CRYPTO_AHB_CBCR 0x16024
#define CRYPTO_AXI_CBCR 0x16020
#define CRYPTO_CBCR 0x1601C
#define GP1_CBCR 0x8000
#define GP2_CBCR 0x9000
#define GP3_CBCR 0xA000
#define MSS_CFG_AHB_CBCR 0x49000
#define MSS_Q6_BIMC_AXI_CBCR 0x49004
#define PCNOC_APSS_AHB_CBCR 0x27030
#define PDM2_CBCR 0x4400C
#define PDM_AHB_CBCR 0x44004
#define QPIC_AHB_CBCR 0x3F01C
#define QPIC_CBCR 0x3F018
#define QPIC_SYSTEM_CBCR 0x3F020
#define SDCC1_AHB_CBCR 0x4201C
#define SDCC1_APPS_CBCR 0x42018
#define SDCC2_AHB_CBCR 0x4301C
#define SDCC2_APPS_CBCR 0x43018
#define EMAC_0_125M_CBCR 0x4E010
#define EMAC_0_AHB_CBCR 0x4E000
#define EMAC_0_AXI_CBCR 0x4E008
#define EMAC_0_RX_CBCR 0x4E030
#define EMAC_0_SYS_25M_CBCR 0x4E038
#define EMAC_0_SYS_CBCR 0x4E034
#define EMAC_0_TX_CBCR 0x4E00C
#define APSS_TCU_CBCR 0x12018
#define SMMU_CFG_CBCR 0x12038
#define QDSS_DAP_CBCR 0x29084
#define APCS_SMMU_CLOCK_BRANCH_ENA_VOTE 0x4500C
#define USB2A_PHY_SLEEP_CBCR 0x4102C
#define USB_HS_PHY_CFG_AHB_CBCR 0x41030
#define USB_HS_AHB_CBCR 0x41008
#define USB_HS_SYSTEM_CBCR 0x41004
#define USB_HS_BCR 0x41000
#define USB_HSIC_AHB_CBCR 0x3D04C
#define USB_HSIC_CBCR 0x3D050
#define USB_HSIC_IO_CAL_CBCR 0x3D054
#define USB_HSIC_IO_CAL_SLEEP_CBCR 0x3D058
#define USB_HSIC_SYSTEM_CBCR 0x3D048
#define USB_HS_HSIC_BCR 0x3D05C
#define USB2_HS_PHY_ONLY_BCR 0x41034
#define QUSB2_PHY_BCR 0x4103C
#define GCC_DEBUG_CLK_CTL 0x74000
#define CLOCK_FRQ_MEASURE_CTL 0x74004
#define CLOCK_FRQ_MEASURE_STATUS 0x74008
#define PLLTEST_PAD_CFG 0x7400C
#define GCC_XO_DIV4_CBCR 0x30034

#define xo_source_val 0
#define xo_a_source_val 0
#define gpll0_source_val 1
#define gpll2_source_val 1
#define emac_0_125m_clk_source_val 1
#define emac_0_tx_clk_source_val 2

#define F(f, s, div, m, n) \
	{ \
		.freq_hz = (f), \
		.src_clk = &s##_clk_src.c, \
		.m_val = (m), \
		.n_val = ~((n)-(m)) * !!(n), \
		.d_val = ~(n), \
		.div_src_val = BVAL(4, 0, (int)(2*(div) - 1)) \
			| BVAL(10, 8, s##_source_val), \
	}

#define F_EXT(f, s, div, m, n) \
	{ \
		.freq_hz = (f), \
		.m_val = (m), \
		.n_val = ~((n)-(m)) * !!(n), \
		.d_val = ~(n), \
		.div_src_val = BVAL(4, 0, (int)(2*(div) - 1)) \
			| BVAL(10, 8, s##_source_val), \
	}

#define VDD_DIG_FMAX_MAP1(l1, f1) \
	.vdd_class = &vdd_dig, \
	.fmax = (unsigned long[VDD_DIG_NUM]) { \
		[VDD_DIG_##l1] = (f1), \
	}, \
	.num_fmax = VDD_DIG_NUM

#define VDD_DIG_FMAX_MAP2(l1, f1, l2, f2) \
	.vdd_class = &vdd_dig, \
	.fmax = (unsigned long[VDD_DIG_NUM]) { \
		[VDD_DIG_##l1] = (f1), \
		[VDD_DIG_##l2] = (f2), \
	}, \
	.num_fmax = VDD_DIG_NUM

#define VDD_DIG_FMAX_MAP3(l1, f1, l2, f2, l3, f3) \
	.vdd_class = &vdd_dig, \
	.fmax = (unsigned long[VDD_DIG_NUM]) { \
		[VDD_DIG_##l1] = (f1), \
		[VDD_DIG_##l2] = (f2), \
		[VDD_DIG_##l3] = (f3), \
	}, \
	.num_fmax = VDD_DIG_NUM

enum vdd_dig_levels {
	VDD_DIG_NONE,
	VDD_DIG_LOWER,
	VDD_DIG_LOW,
	VDD_DIG_NOMINAL,
	VDD_DIG_HIGH,
	VDD_DIG_NUM
};

static int vdd_corner[] = {
	RPM_REGULATOR_LEVEL_NONE,	/* VDD_DIG_NONE */
	RPM_REGULATOR_LEVEL_SVS,	/* VDD_DIG_LOWER */
	RPM_REGULATOR_LEVEL_SVS_PLUS,	/* VDD_DIG_LOW */
	RPM_REGULATOR_LEVEL_NOM,	/* VDD_DIG_NOMINAL */
	RPM_REGULATOR_LEVEL_TURBO,	/* VDD_DIG_HIGH */
};

static DEFINE_VDD_REGULATORS(vdd_dig, VDD_DIG_NUM, 1, vdd_corner, NULL);

#define VDD_STROMER_FMAX_MAP1(l1, f1) \
	.vdd_class = &vdd_stromer_pll, \
	.fmax = (unsigned long[VDD_DIG_NUM]) { \
		[VDD_DIG_##l1] = (f1), \
	}, \
	.num_fmax = VDD_DIG_NUM

#define RPM_MISC_CLK_TYPE 0x306b6c63
#define RPM_BUS_CLK_TYPE 0x316b6c63
#define RPM_MEM_CLK_TYPE 0x326b6c63
#define RPM_SMD_KEY_ENABLE 0x62616E45
#define RPM_QPIC_CLK_TYPE 0x63697071

#define XO_ID 0x0
#define QDSS_ID 0x1
#define PCNOC_ID 0x0
#define BIMC_ID 0x0
#define QPIC_ID 0x0

/* XO clock */
#define BB_CLK1_ID 1
#define RF_CLK2_ID 5

#endif
18
include/dt-bindings/clock/msm-clocks-a7.h
Normal file
@@ -0,0 +1,18 @@
/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __MSM_CLOCKS_A7_H
#define __MSM_CLOCKS_A7_H

#define clk_a7ssmux 0x3ea882af

#endif
@@ -1069,6 +1069,12 @@ static inline void clk_writel(u32 val, u32 __iomem *reg)
struct dentry *clk_debugfs_add_file(struct clk_hw *hw, char *name, umode_t mode,
			void *data, const struct file_operations *fops);
#endif
#else
struct of_device_id;

static inline void __init of_clk_init(const struct of_device_id *matches)
{
}

#endif /* CONFIG_COMMON_CLK */
#endif /* CLK_PROVIDER_H */

@@ -204,11 +204,13 @@ static inline long clk_get_phase(struct clk *clk)
	return -ENOTSUPP;
}

#ifndef CONFIG_COMMON_CLK_MSM
static inline int clk_set_duty_cycle(struct clk *clk, unsigned int num,
			unsigned int den)
{
	return -ENOTSUPP;
}
#endif

static inline unsigned int clk_get_scaled_duty_cycle(struct clk *clk,
			unsigned int scale)
@@ -721,7 +723,7 @@ static inline void clk_bulk_disable_unprepare(int num_clks,
	clk_bulk_unprepare(num_clks, clks);
}

#if defined(CONFIG_OF) && defined(CONFIG_COMMON_CLK)
#if defined(CONFIG_OF)
struct clk *of_clk_get(struct device_node *np, int index);
struct clk *of_clk_get_by_name(struct device_node *np, const char *name);
struct clk *of_clk_get_from_provider(struct of_phandle_args *clkspec);

21
include/linux/clk/gdsc.h
Normal file
@@ -0,0 +1,21 @@
/* Copyright (c) 2015, 2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __GDSC_H
#define __GDSC_H

#include <linux/regulator/consumer.h>

/* Allow the clock memories to be turned off */
void gdsc_allow_clear_retention(struct regulator *regulator);

#endif
269
include/linux/clk/msm-clk-provider.h
Normal file
@@ -0,0 +1,269 @@
/* Copyright (C) 2007 Google, Inc.
 * Copyright (c) 2007-2017, The Linux Foundation. All rights reserved.
 *
 * This software is licensed under the terms of the GNU General Public
 * License version 2, as published by the Free Software Foundation, and
 * may be copied, distributed, and modified under those terms.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __MSM_CLK_PROVIDER_H
#define __MSM_CLK_PROVIDER_H

#include <linux/types.h>
#include <linux/err.h>
#include <linux/list.h>
#include <linux/clkdev.h>
#include <linux/of.h>
#include <linux/device.h>
#include <linux/spinlock.h>
#include <linux/platform_device.h>
#include <linux/mutex.h>
#include <linux/regulator/consumer.h>
#include <linux/seq_file.h>
#include <linux/clk/msm-clk.h>

#if defined(CONFIG_COMMON_CLK_MSM)
/*
 * Bit manipulation macros
 */
#define BM(msb, lsb) (((((uint32_t)-1) << (31-msb)) >> (31-msb+lsb)) << lsb)
#define BVAL(msb, lsb, val) (((val) << lsb) & BM(msb, lsb))

/*
 * Halt/Status Checking Mode Macros
 */
#define HALT 0		/* Bit pol: 1 = halted */
#define NOCHECK 1	/* No bit to check, do nothing */
#define HALT_VOTED 2	/* Bit pol: 1 = halted; delay on disable */
#define ENABLE 3	/* Bit pol: 1 = running */
#define ENABLE_VOTED 4	/* Bit pol: 1 = running; delay on disable */
#define DELAY 5		/* No bit to check, just delay */

struct clk_register_data {
	char *name;
	u32 offset;
};
#ifdef CONFIG_DEBUG_FS
void clk_debug_print_hw(struct clk *clk, struct seq_file *f);
#else
static inline void clk_debug_print_hw(struct clk *clk, struct seq_file *f) {}
#endif

#define CLK_WARN(clk, cond, fmt, ...) do { \
	clk_debug_print_hw(clk, NULL); \
	WARN(cond, "%s: " fmt, clk_name(clk), ##__VA_ARGS__); \
} while (0)

/**
 * struct clk_vdd_class - Voltage scaling class
 * @class_name: name of the class
 * @regulator: array of regulators.
 * @num_regulators: size of regulator array. Standard regulator APIs will be
 *			used if this field > 0.
 * @set_vdd: function to call when applying a new voltage setting.
 * @vdd_uv: sorted 2D array of legal voltage settings. Indexed by level, then
 *			regulator.
 * @vdd_ua: sorted 2D array of legal current settings. Indexed by level, then
 *			regulator. Optional parameter.
 * @level_votes: array of votes for each level.
 * @num_levels: specifies the size of level_votes array.
 * @skip_handoff: do not vote for the max possible voltage during init
 * @use_max_uV: use INT_MAX for max_uV when calling regulator_set_voltage
 *		This is useful when different vdd_class share same regulator.
 * @cur_level: the currently set voltage level
 * @lock: lock to protect this struct
 */
struct clk_vdd_class {
	const char *class_name;
	struct regulator **regulator;
	int num_regulators;
	int (*set_vdd)(struct clk_vdd_class *v_class, int level);
	int *vdd_uv;
	int *vdd_ua;
	int *level_votes;
	int num_levels;
	bool skip_handoff;
	bool use_max_uV;
	unsigned long cur_level;
	struct mutex lock;
};

#define DEFINE_VDD_CLASS(_name, _set_vdd, _num_levels) \
	struct clk_vdd_class _name = { \
		.class_name = #_name, \
		.set_vdd = _set_vdd, \
		.level_votes = (int [_num_levels]) {}, \
		.num_levels = _num_levels, \
		.cur_level = _num_levels, \
		.lock = __MUTEX_INITIALIZER(_name.lock) \
	}

#define DEFINE_VDD_REGULATORS(_name, _num_levels, _num_regulators, _vdd_uv, \
	_vdd_ua) \
	struct clk_vdd_class _name = { \
		.class_name = #_name, \
		.vdd_uv = _vdd_uv, \
		.vdd_ua = _vdd_ua, \
		.regulator = (struct regulator * [_num_regulators]) {}, \
		.num_regulators = _num_regulators, \
		.level_votes = (int [_num_levels]) {}, \
		.num_levels = _num_levels, \
		.cur_level = _num_levels, \
		.lock = __MUTEX_INITIALIZER(_name.lock) \
	}

#define DEFINE_VDD_REGS_INIT(_name, _num_regulators) \
	struct clk_vdd_class _name = { \
		.class_name = #_name, \
		.regulator = (struct regulator * [_num_regulators]) {}, \
		.num_regulators = _num_regulators, \
		.lock = __MUTEX_INITIALIZER(_name.lock) \
	}

enum handoff {
	HANDOFF_ENABLED_CLK,
	HANDOFF_DISABLED_CLK,
};

struct clk_ops {
	int (*prepare)(struct clk *clk);
	int (*enable)(struct clk *clk);
	void (*disable)(struct clk *clk);
	void (*unprepare)(struct clk *clk);
	void (*enable_hwcg)(struct clk *clk);
	void (*disable_hwcg)(struct clk *clk);
	int (*in_hwcg_mode)(struct clk *clk);
	enum handoff (*handoff)(struct clk *clk);
	int (*reset)(struct clk *clk, enum clk_reset_action action);
	int (*pre_set_rate)(struct clk *clk, unsigned long new_rate);
	int (*set_rate)(struct clk *clk, unsigned long rate);
	void (*post_set_rate)(struct clk *clk, unsigned long old_rate);
	int (*set_max_rate)(struct clk *clk, unsigned long rate);
	int (*set_flags)(struct clk *clk, unsigned long flags);
	int (*set_duty_cycle)(struct clk *clk, u32 numerator, u32 denominator);
	unsigned long (*get_rate)(struct clk *clk);
	long (*list_rate)(struct clk *clk, unsigned long n);
	int (*is_enabled)(struct clk *clk);
	long (*round_rate)(struct clk *clk, unsigned long rate);
	int (*set_parent)(struct clk *clk, struct clk *parent);
	struct clk *(*get_parent)(struct clk *clk);
	bool (*is_local)(struct clk *clk);
	void __iomem *(*list_registers)(struct clk *clk, int n,
				struct clk_register_data **regs, u32 *size);
};

/**
 * struct clk
 * @prepare_count: prepare refcount
 * @prepare_lock: protects clk_prepare()/clk_unprepare() path and @prepare_count
 * @count: enable refcount
 * @lock: protects clk_enable()/clk_disable() path and @count
 * @depends: non-direct parent of clock to enable when this clock is enabled
 * @vdd_class: voltage scaling requirement class
 * @fmax: maximum frequency in Hz supported at each voltage level
 * @parent: the current source of this clock
 * @opp_table_populated: tracks if the OPP table of this clock has been filled
 */
struct clk {
	uint32_t flags;
	const struct clk_ops *ops;
	const char *dbg_name;
	struct clk *depends;
	struct clk_vdd_class *vdd_class;
	unsigned long *fmax;
	int num_fmax;
	unsigned long rate;
	struct clk *parent;
	struct clk_src *parents;
	unsigned int num_parents;

	struct list_head children;
	struct list_head siblings;
	struct list_head list;

	unsigned long count;
	unsigned long notifier_count;
	spinlock_t lock;
	unsigned long prepare_count;
	struct mutex prepare_lock;

	unsigned long init_rate;
	bool always_on;
	bool opp_table_populated;

	struct dentry *clk_dir;
};

#define CLK_INIT(name) \
	.lock = __SPIN_LOCK_UNLOCKED((name).lock), \
	.prepare_lock = __MUTEX_INITIALIZER((name).prepare_lock), \
	.children = LIST_HEAD_INIT((name).children), \
	.siblings = LIST_HEAD_INIT((name).siblings), \
	.list = LIST_HEAD_INIT((name).list)

bool is_rate_valid(struct clk *clk, unsigned long rate);
int vote_vdd_level(struct clk_vdd_class *vdd_class, int level);
int unvote_vdd_level(struct clk_vdd_class *vdd_class, int level);
int __clk_pre_reparent(struct clk *c, struct clk *new, unsigned long *flags);
void __clk_post_reparent(struct clk *c, struct clk *old, unsigned long *flags);

/* Register clocks with the MSM clock driver */
int msm_clock_register(struct clk_lookup *table, size_t size);
int of_msm_clock_register(struct device_node *np, struct clk_lookup *table,
				size_t size);

int clock_rcgwr_init(struct platform_device *pdev);
int clock_rcgwr_disable(struct platform_device *pdev);

extern struct clk dummy_clk;
extern const struct clk_ops clk_ops_dummy;

#define CLK_DUMMY(clk_name, clk_id, clk_dev, flags) { \
	.con_id = clk_name, \
	.dev_id = clk_dev, \
	.clk = &dummy_clk, \
}

#define DEFINE_CLK_DUMMY(name, _rate) \
	static struct fixed_clk name = { \
		.c = { \
			.dbg_name = #name, \
			.rate = _rate, \
			.ops = &clk_ops_dummy, \
			CLK_INIT(name.c), \
		}, \
	}

#define CLK_LOOKUP(con, c, dev) { .con_id = con, .clk = &c, .dev_id = dev }
#define CLK_LOOKUP_OF(con, _c, dev) { .con_id = con, .clk = &(&_c)->c, \
	.dev_id = dev, .of_idx = clk_##_c }
#define CLK_LIST(_c) { .clk = &(&_c)->c, .of_idx = clk_##_c }

static inline bool is_better_rate(unsigned long req, unsigned long best,
				unsigned long new)
{
	if (IS_ERR_VALUE(new))
		return false;

	return (req <= new && new < best) || (best < req && best < new);
}

extern int of_clk_add_provider(struct device_node *np,
			struct clk *(*clk_src_get)(struct of_phandle_args *args,
						void *data),
			void *data);
extern void of_clk_del_provider(struct device_node *np);

static inline const char *clk_name(struct clk *c)
{
	if (IS_ERR_OR_NULL(c))
		return "(null)";
	return c->dbg_name;
}
#endif /* CONFIG_COMMON_CLK_MSM */
#endif
125
include/linux/clk/msm-clk.h
Normal file
@@ -0,0 +1,125 @@
/* Copyright (c) 2009, 2012-2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __MACH_CLK_H
#define __MACH_CLK_H

#include <linux/notifier.h>

#define CLKFLAG_INVERT 0x00000001
#define CLKFLAG_NOINVERT 0x00000002
#define CLKFLAG_NONEST 0x00000004
#define CLKFLAG_NORESET 0x00000008
#define CLKFLAG_RETAIN_PERIPH 0x00000010
#define CLKFLAG_NORETAIN_PERIPH 0x00000020
#define CLKFLAG_RETAIN_MEM 0x00000040
#define CLKFLAG_NORETAIN_MEM 0x00000080
#define CLKFLAG_SKIP_HANDOFF 0x00000100
#define CLKFLAG_MIN 0x00000400
#define CLKFLAG_MAX 0x00000800
#define CLKFLAG_INIT_DONE 0x00001000
#define CLKFLAG_INIT_ERR 0x00002000
#define CLKFLAG_NO_RATE_CACHE 0x00004000
#define CLKFLAG_MEASURE 0x00008000
#define CLKFLAG_EPROBE_DEFER 0x00010000
#define CLKFLAG_PERIPH_OFF_SET 0x00020000
#define CLKFLAG_PERIPH_OFF_CLEAR 0x00040000

struct clk_lookup;
struct clk;

enum clk_reset_action {
	CLK_RESET_DEASSERT = 0,
	CLK_RESET_ASSERT = 1
};

struct clk_src {
	struct clk *src;
	int sel;
};

/* Rate is maximum clock rate in Hz */
int clk_set_max_rate(struct clk *clk, unsigned long rate);

/* Assert/Deassert reset to a hardware block associated with a clock */
int clk_reset(struct clk *clk, enum clk_reset_action action);

/* Set clock-specific configuration parameters */
int clk_set_flags(struct clk *clk, unsigned long flags);

/* returns the mux selection index associated with a particular parent */
int parent_to_src_sel(struct clk_src *parents, int num_parents, struct clk *p);

/* returns the mux selection index associated with a particular parent */
int clk_get_parent_sel(struct clk *c, struct clk *parent);

/**
 * DOC: clk notifier callback types
 *
 * PRE_RATE_CHANGE - called immediately before the clk rate is changed,
 *     to indicate that the rate change will proceed. Drivers must
 *     immediately terminate any operations that will be affected by the
 *     rate change. Callbacks may either return NOTIFY_DONE, NOTIFY_OK,
 *     NOTIFY_STOP or NOTIFY_BAD.
 *
 * ABORT_RATE_CHANGE: called if the rate change failed for some reason
 *     after PRE_RATE_CHANGE. In this case, all registered notifiers on
 *     the clk will be called with ABORT_RATE_CHANGE. Callbacks must
 *     always return NOTIFY_DONE or NOTIFY_OK.
 *
 * POST_RATE_CHANGE - called after the clk rate change has successfully
 *     completed. Callbacks must always return NOTIFY_DONE or NOTIFY_OK.
 *
 */
#define PRE_RATE_CHANGE BIT(0)
#define POST_RATE_CHANGE BIT(1)
#define ABORT_RATE_CHANGE BIT(2)

/**
 * struct msm_clk_notifier - associate a clk with a notifier
 * @clk: struct clk * to associate the notifier with
 * @notifier_head: a blocking_notifier_head for this clk
 * @node: linked list pointers
 *
 * A list of struct clk_notifier is maintained by the notifier code.
 * An entry is created whenever code registers the first notifier on a
 * particular @clk. Future notifiers on that @clk are added to the
 * @notifier_head.
 */
struct msm_clk_notifier {
	struct clk *clk;
	struct srcu_notifier_head notifier_head;
	struct list_head node;
};

/**
 * struct msm_clk_notifier_data - rate data to pass to the notifier callback
 * @clk: struct clk * being changed
 * @old_rate: previous rate of this clk
 * @new_rate: new rate of this clk
 *
 * For a pre-notifier, old_rate is the clk's rate before this rate
 * change, and new_rate is what the rate will be in the future. For a
 * post-notifier, old_rate and new_rate are both set to the clk's
 * current rate (this was done to optimize the implementation).
 */
struct msm_clk_notifier_data {
	struct clk *clk;
	unsigned long old_rate;
	unsigned long new_rate;
};

int msm_clk_notif_register(struct clk *clk, struct notifier_block *nb);

int msm_clk_notif_unregister(struct clk *clk, struct notifier_block *nb);
||||
#endif
|
||||
309	include/linux/clk/msm-clock-generic.h	Normal file
@@ -0,0 +1,309 @@
/* Copyright (c) 2013-2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#ifndef __MSM_CLOCK_GENERIC_H
#define __MSM_CLOCK_GENERIC_H

#include <linux/clk/msm-clk-provider.h>
#include <linux/of.h>

/**
 * struct fixed_clk - fixed rate clock
 * @c: clk
 */
struct fixed_clk {
	struct clk c;
};

/* ==================== Mux clock ==================== */

struct mux_clk;

struct clk_mux_ops {
	int (*set_mux_sel)(struct mux_clk *clk, int sel);
	int (*get_mux_sel)(struct mux_clk *clk);

	/* Optional */
	bool (*is_enabled)(struct mux_clk *clk);
	int (*enable)(struct mux_clk *clk);
	void (*disable)(struct mux_clk *clk);
	void __iomem *(*list_registers)(struct mux_clk *clk, int n,
				struct clk_register_data **regs, u32 *size);
};

#define MUX_SRC_LIST(...) \
	.parents = (struct clk_src[]){__VA_ARGS__}, \
	.num_parents = ARRAY_SIZE(((struct clk_src[]){__VA_ARGS__}))

#define MUX_REC_SRC_LIST(...) \
	.rec_parents = (struct clk *[]){__VA_ARGS__}, \
	.num_rec_parents = ARRAY_SIZE(((struct clk *[]){__VA_ARGS__}))
struct mux_clk {
	/* Parents in decreasing order of preference for obtaining rates. */
	struct clk_src	*parents;
	int		num_parents;
	/* Recursively search for the requested parent in rec_parents. */
	struct clk	**rec_parents;
	int		num_rec_parents;
	struct clk	*safe_parent;
	int		safe_sel;
	unsigned long	safe_freq;
	/*
	 * Before attempting a clk_round_rate on available sources, attempt a
	 * clk_get_rate on all those sources. If one of them is already at the
	 * necessary rate, that source will be used.
	 */
	bool		try_get_rate;
	struct clk_mux_ops *ops;
	/*
	 * Set if you need the mux to try a new parent before falling back to
	 * the current parent. If the safe_parent field above is set, then the
	 * safe_sel intermediate source will only be used if we fall back to
	 * the current parent during mux_set_rate.
	 */
	bool		try_new_parent;

	/* Fields not used by helper function. */
	void *const __iomem *base;
	u32	offset;
	u32	en_offset;
	u32	mask;
	u32	shift;
	u32	en_mask;
	/*
	 * Set post divider for debug mux in order to divide the clock
	 * by post_div + 1.
	 */
	u32	post_div;
	int	low_power_sel;
	void	*priv;

	struct clk c;
};

static inline struct mux_clk *to_mux_clk(struct clk *c)
{
	return container_of(c, struct mux_clk, c);
}

extern const struct clk_ops clk_ops_gen_mux;
/* ==================== Divider clock ==================== */

struct div_clk;

struct clk_div_ops {
	int (*set_div)(struct div_clk *clk, int div);
	int (*get_div)(struct div_clk *clk);
	bool (*is_enabled)(struct div_clk *clk);
	int (*enable)(struct div_clk *clk);
	void (*disable)(struct div_clk *clk);
	void __iomem *(*list_registers)(struct div_clk *clk, int n,
				struct clk_register_data **regs, u32 *size);
};

struct div_data {
	unsigned int div;
	unsigned int min_div;
	unsigned int max_div;
	unsigned long rate_margin;
	/*
	 * Indicate whether this divider clock supports half-integer dividers.
	 * If it does, min_div and max_div have both been doubled, i.e. they
	 * store 2*N.
	 */
	bool is_half_divider;
	/*
	 * Skip odd dividers since the hardware may not support them.
	 */
	bool skip_odd_div;
	bool skip_even_div;
	bool allow_div_one;
	unsigned int cached_div;
};

struct div_clk {
	struct div_data data;

	/*
	 * Some implementations may require the divider to be set to a "safe"
	 * value that allows reprogramming of upstream clocks without violating
	 * voltage constraints.
	 */
	unsigned long safe_freq;

	/* Optional */
	struct clk_div_ops *ops;

	/* Fields not used by helper function. */
	void *const __iomem *base;
	u32	offset;
	u32	mask;
	u32	shift;
	u32	en_mask;
	void	*priv;
	struct clk c;
};

static inline struct div_clk *to_div_clk(struct clk *c)
{
	return container_of(c, struct div_clk, c);
}

extern const struct clk_ops clk_ops_div;
extern const struct clk_ops clk_ops_slave_div;
struct ext_clk {
	struct clk c;
	struct device *dev;
	char *clk_id;
};

long parent_round_rate(struct clk *c, unsigned long rate);
unsigned long parent_get_rate(struct clk *c);
int parent_set_rate(struct clk *c, unsigned long rate);

static inline struct ext_clk *to_ext_clk(struct clk *c)
{
	return container_of(c, struct ext_clk, c);
}

extern const struct clk_ops clk_ops_ext;

#define DEFINE_FIXED_DIV_CLK(clk_name, _div, _parent) \
static struct div_clk clk_name = {	\
	.data = {			\
		.max_div = _div,	\
		.min_div = _div,	\
		.div = _div,		\
	},				\
	.c = {				\
		.parent = _parent,	\
		.dbg_name = #clk_name,	\
		.ops = &clk_ops_div,	\
		CLK_INIT(clk_name.c),	\
	}				\
}

#define DEFINE_FIXED_SLAVE_DIV_CLK(clk_name, _div, _parent) \
static struct div_clk clk_name = {	\
	.data = {			\
		.max_div = _div,	\
		.min_div = _div,	\
		.div = _div,		\
	},				\
	.c = {				\
		.parent = _parent,	\
		.dbg_name = #clk_name,	\
		.ops = &clk_ops_slave_div, \
		CLK_INIT(clk_name.c),	\
	}				\
}

#define DEFINE_EXT_CLK(clk_name, _parent) \
static struct ext_clk clk_name = {	\
	.c = {				\
		.parent = _parent,	\
		.dbg_name = #clk_name,	\
		.ops = &clk_ops_ext,	\
		CLK_INIT(clk_name.c),	\
	}				\
}
/* ==================== Mux Div clock ==================== */

struct mux_div_clk;

/*
 * struct mux_div_ops
 * the enable and disable ops are optional.
 */
struct mux_div_ops {
	int (*set_src_div)(struct mux_div_clk *, u32 src_sel, u32 div);
	void (*get_src_div)(struct mux_div_clk *, u32 *src_sel, u32 *div);
	int (*enable)(struct mux_div_clk *);
	void (*disable)(struct mux_div_clk *);
	bool (*is_enabled)(struct mux_div_clk *);
	void __iomem *(*list_registers)(struct mux_div_clk *md, int n,
				struct clk_register_data **regs, u32 *size);
};

/*
 * struct mux_div_clk - combined mux/divider clock
 * @priv:
 *		parameters needed by ops
 * @safe_freq:
 *		when switching rates from A to B, the mux div clock will
 *		instead switch from A -> safe_freq -> B. This allows the
 *		mux_div clock to change rates while enabled, even if this
 *		behavior is not supported by the parent clocks.
 *
 *		If changing the rate of parent A also causes the rate of
 *		parent B to change, then safe_freq must be defined.
 *
 *		safe_freq is expected to have a source clock which is always
 *		on and runs at only one rate.
 * @parents:
 *		list of parents and mux indices
 * @ops:
 *		function pointers for hw specific operations
 * @src_sel:
 *		the mux index which will be used if the clock is enabled.
 * @try_get_rate:
 *		set if you need the mux to directly jump to a source
 *		that is currently running at the desired rate.
 * @force_enable_md:
 *		set if the mux-div needs to be force enabled/disabled during
 *		clk_enable/disable.
 */
struct mux_div_clk {
	/* Required parameters */
	struct mux_div_ops		*ops;
	struct div_data			data;
	struct clk_src			*parents;
	u32				num_parents;

	struct clk			c;

	/* Internal */
	u32				src_sel;

	/* Optional parameters */
	void				*priv;
	void __iomem			*base;
	u32				div_mask;
	u32				div_offset;
	u32				div_shift;
	u32				src_mask;
	u32				src_offset;
	u32				src_shift;
	u32				en_mask;
	u32				en_offset;

	u32				safe_div;
	struct clk			*safe_parent;
	unsigned long			safe_freq;
	bool				try_get_rate;
	bool				force_enable_md;
};

static inline struct mux_div_clk *to_mux_div_clk(struct clk *clk)
{
	return container_of(clk, struct mux_div_clk, c);
}

extern const struct clk_ops clk_ops_mux_div_clk;

#endif
@@ -1,5 +1,4 @@
/*
 * Copyright (c) 2016, The Linux Foundation. All rights reserved.
/* Copyright (c) 2016-2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
@@ -9,12 +8,12 @@
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 */

#ifndef __LINUX_CLK_QCOM_H_
#define __LINUX_CLK_QCOM_H_

#if defined(CONFIG_COMMON_CLK_QCOM)
enum branch_mem_flags {
	CLKFLAG_RETAIN_PERIPH,
	CLKFLAG_NORETAIN_PERIPH,
@@ -23,5 +22,8 @@ enum branch_mem_flags {
	CLKFLAG_PERIPH_OFF_SET,
	CLKFLAG_PERIPH_OFF_CLEAR,
};
#elif defined(CONFIG_COMMON_CLK_MSM)
#include <linux/clk/msm-clk.h>
#endif /* CONFIG_COMMON_CLK_QCOM */

#endif /* __LINUX_CLK_QCOM_H_ */

@@ -22,6 +22,7 @@ struct clk_lookup {
	struct list_head node;
	const char *dev_id;
	const char *con_id;
	int of_idx;
	struct clk *clk;
	struct clk_hw *clk_hw;
};
104	include/soc/qcom/clock-alpha-pll.h	Normal file
@@ -0,0 +1,104 @@
/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#ifndef __ARCH_ARM_MACH_MSM_CLOCK_ALPHA_PLL_H
#define __ARCH_ARM_MACH_MSM_CLOCK_ALPHA_PLL_H

#include <linux/spinlock.h>
#include <linux/clk/msm-clk-provider.h>

struct alpha_pll_masks {
	u32 lock_mask;		/* lock_det bit */
	u32 active_mask;	/* active_flag in FSM mode */
	u32 update_mask;	/* update bit for dynamic update */
	u32 vco_mask;		/* vco_sel bits */
	u32 vco_shift;
	u32 alpha_en_mask;	/* alpha_en bit */
	u32 output_mask;	/* pllout_* bits */
	u32 post_div_mask;
	u32 cal_l_val_mask;

	u32 test_ctl_lo_mask;
	u32 test_ctl_hi_mask;
};

struct alpha_pll_vco_tbl {
	u32 vco_val;
	unsigned long min_freq;
	unsigned long max_freq;
};

#define VCO(a, b, c) { \
	.vco_val = a,\
	.min_freq = b,\
	.max_freq = c,\
}

struct alpha_pll_clk {
	struct alpha_pll_masks *masks;

	void *const __iomem *base;

	u32 offset;
	u32 fabia_frac_offset;

	/* if fsm_en_mask is set, config PLL to FSM mode */
	u32 fsm_reg_offset;
	u32 fsm_en_mask;

	u32 enable_config;	/* bitmask of outputs to be enabled */
	u32 post_div_config;	/* masked post divider setting */
	u32 config_ctl_val;	/* config register init value */
	u32 test_ctl_lo_val;	/* test control settings */
	u32 test_ctl_hi_val;
	u32 cal_l_val;		/* Calibration L value */

	struct alpha_pll_vco_tbl *vco_tbl;
	u32 num_vco;
	u32 current_vco_val;
	bool inited;
	bool slew;
	bool no_prepared_reconfig;

	/*
	 * Some PLLs support dynamically updating their rate without
	 * disabling the PLL first. Set this flag to enable this support.
	 */
	bool dynamic_update;

	/*
	 * Some chipsets need the offline request bit to be cleared on a
	 * second write to the register, even though SW wants the bit to
	 * be set. Set this flag to indicate that the workaround is
	 * required.
	 */
	bool offline_bit_workaround;
	bool no_irq_dis;
	bool is_fabia;
	unsigned long min_supported_freq;
	struct clk c;
};

static inline struct alpha_pll_clk *to_alpha_pll_clk(struct clk *c)
{
	return container_of(c, struct alpha_pll_clk, c);
}

extern void __init_alpha_pll(struct clk *c);
extern const struct clk_ops clk_ops_alpha_pll;
extern const struct clk_ops clk_ops_alpha_pll_hwfsm;
extern const struct clk_ops clk_ops_fixed_alpha_pll;
extern const struct clk_ops clk_ops_dyna_alpha_pll;
extern const struct clk_ops clk_ops_fixed_fabia_alpha_pll;
extern const struct clk_ops clk_ops_fabia_alpha_pll;

#endif
273	include/soc/qcom/clock-local2.h	Normal file
@@ -0,0 +1,273 @@
/* Copyright (c) 2012-2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#ifndef __ARCH_ARM_MACH_MSM_CLOCK_LOCAL_2_H
#define __ARCH_ARM_MACH_MSM_CLOCK_LOCAL_2_H

#include <linux/spinlock.h>
#include <linux/clk/msm-clk-provider.h>
#include <linux/clk/msm-clk.h>

/*
 * Generic frequency-definition structs and macros
 */

/**
 * struct clk_freq_tbl - frequency table entry
 * @freq_hz: output rate
 * @src_freq: source freq for dynamic pll. For fixed plls, set to 0.
 * @src_clk: source clock for freq_hz
 * @m_val: M value corresponding to freq_hz
 * @n_val: N value corresponding to freq_hz
 * @d_val: D value corresponding to freq_hz
 * @div_src_val: Pre divider value and source selection mux index for freq_hz
 * @sys_vdd: Voltage level required for freq_hz
 */
struct clk_freq_tbl {
	unsigned long	freq_hz;
	unsigned long	src_freq;
	struct clk	*src_clk;
	u32	m_val;
	u32	n_val;
	u32	d_val;
	u32	div_src_val;
	const unsigned long sys_vdd;
};

#define FREQ_END	(ULONG_MAX-1)
#define F_END		{ .freq_hz = FREQ_END }
#define FIXED_CLK_SRC	0
/*
 * Generic clock-definition struct and macros
 */

/**
 * struct rcg_clk - root clock generator
 * @cmd_rcgr_reg: command register
 * @mnd_reg_width: Width of MND register
 * @set_rate: function to set frequency
 * @freq_tbl: frequency table for this RCG
 * @current_freq: current RCG frequency
 * @c: generic clock data
 * @non_local_children: set if RCG has at least one branch owned by a diff EE
 * @non_local_control_timeout: configurable RCG timeout needed when all RCG
 *			       children can be controlled by an entity
 *			       outside of HLOS.
 * @force_enable_rcgr: set if RCG needs to be force enabled/disabled during
 *		       power sequence
 * @base: pointer to base address of ioremapped registers.
 */
struct rcg_clk {
	u32 cmd_rcgr_reg;
	u32 mnd_reg_width;

	void	(*set_rate)(struct rcg_clk *, struct clk_freq_tbl *);

	struct clk_freq_tbl *freq_tbl;
	struct clk_freq_tbl *current_freq;
	struct clk	c;

	bool non_local_children;
	int non_local_control_timeout;
	bool force_enable_rcgr;

	void *const __iomem *base;
};

static inline struct rcg_clk *to_rcg_clk(struct clk *clk)
{
	return container_of(clk, struct rcg_clk, c);
}

extern struct clk_freq_tbl rcg_dummy_freq;

/**
 * struct branch_clk - branch clock
 * @set_rate: Set the frequency of this branch clock.
 * @c: clk
 * @cbcr_reg: branch control register
 * @bcr_reg: block reset register
 * @has_sibling: true if other branches are derived from this branch's source
 * @cur_div: current branch divider value
 * @max_div: maximum branch divider value (if zero, no divider exists)
 * @halt_check: halt checking type
 * @toggle_memory: toggle memory during enable/disable if true
 * @no_halt_check_on_disable: when set, do not check status bit during
 *			      clk_disable().
 * @check_enable_bit: check the enable bit to determine clock status during
 *		      handoff.
 * @aggr_sibling_rates: set if there are multiple branch clocks with rate
 *			setting capability on the common RCG.
 * @is_prepared: set if clock's prepare count is greater than 0.
 * @base: pointer to base address of ioremapped registers.
 */
struct branch_clk {
	void (*set_rate)(struct branch_clk *, struct clk_freq_tbl *);
	struct clk c;
	u32 cbcr_reg;
	u32 bcr_reg;
	int has_sibling;
	u32 cur_div;
	u32 max_div;
	const u32 halt_check;
	bool toggle_memory;
	bool no_halt_check_on_disable;
	bool check_enable_bit;
	bool aggr_sibling_rates;
	bool is_prepared;

	void *const __iomem *base;
};

static inline struct branch_clk *to_branch_clk(struct clk *clk)
{
	return container_of(clk, struct branch_clk, c);
}

/**
 * struct local_vote_clk - voteable branch clock
 * @c: clk
 * @cbcr_reg: branch control register
 * @vote_reg: voting register
 * @bcr_reg: block reset register
 * @en_mask: enable mask
 * @halt_check: halt checking type
 * @base: pointer to base address of ioremapped registers.
 *
 * An on/off switch with a rate derived from the parent.
 */
struct local_vote_clk {
	struct clk c;
	u32 cbcr_reg;
	u32 vote_reg;
	u32 bcr_reg;
	u32 en_mask;
	const u32 halt_check;

	void *__iomem *base;
};

static inline struct local_vote_clk *to_local_vote_clk(struct clk *clk)
{
	return container_of(clk, struct local_vote_clk, c);
}

/**
 * struct reset_clk - reset clock
 * @c: clk
 * @reset_reg: block reset register
 * @base: pointer to base address of ioremapped registers.
 */
struct reset_clk {
	struct clk c;
	u32 reset_reg;

	void *__iomem *base;
};

static inline struct reset_clk *to_reset_clk(struct clk *clk)
{
	return container_of(clk, struct reset_clk, c);
}

/**
 * struct measure_clk - for rate measurement debug use
 * @sample_ticks: sample period in reference clock ticks
 * @multiplier: measurement scale-up factor
 * @divider: measurement scale-down factor
 * @c: clk
 */
struct measure_clk {
	u64 sample_ticks;
	u32 multiplier;
	u32 divider;

	struct clk c;
};

struct measure_clk_data {
	struct clk *cxo;
	u32 plltest_reg;
	u32 plltest_val;
	u32 xo_div4_cbcr;
	u32 ctl_reg;
	u32 status_reg;

	void *const __iomem *base;
};

static inline struct measure_clk *to_measure_clk(struct clk *clk)
{
	return container_of(clk, struct measure_clk, c);
}

/**
 * struct gate_clk - gate clock
 * @c: clk
 * @en_mask: ORed with @en_reg to enable gate clk
 * @en_reg: register used to enable/disable gate clk
 * @base: pointer to base address of ioremapped registers
 */
struct gate_clk {
	struct clk c;
	u32 en_mask;
	u32 en_reg;
	unsigned int delay_us;

	void *const __iomem *base;
};

static inline struct gate_clk *to_gate_clk(struct clk *clk)
{
	return container_of(clk, struct gate_clk, c);
}

/*
 * Generic set-rate implementations
 */
void set_rate_mnd(struct rcg_clk *clk, struct clk_freq_tbl *nf);
void set_rate_hid(struct rcg_clk *clk, struct clk_freq_tbl *nf);

/*
 * Variables from the clock-local driver
 */
extern spinlock_t local_clock_reg_lock;

extern const struct clk_ops clk_ops_empty;
extern const struct clk_ops clk_ops_rcg;
extern const struct clk_ops clk_ops_rcg_mnd;
extern const struct clk_ops clk_ops_branch;
extern const struct clk_ops clk_ops_vote;
extern const struct clk_ops clk_ops_rcg_hdmi;
extern const struct clk_ops clk_ops_rcg_edp;
extern const struct clk_ops clk_ops_byte;
extern const struct clk_ops clk_ops_pixel;
extern const struct clk_ops clk_ops_byte_multiparent;
extern const struct clk_ops clk_ops_pixel_multiparent;
extern const struct clk_ops clk_ops_edppixel;
extern const struct clk_ops clk_ops_gate;
extern const struct clk_ops clk_ops_rst;
extern struct clk_mux_ops mux_reg_ops;
extern struct mux_div_ops rcg_mux_div_ops;
extern const struct clk_div_ops postdiv_reg_ops;

enum handoff pixel_rcg_handoff(struct clk *clk);
enum handoff byte_rcg_handoff(struct clk *clk);
unsigned long measure_get_rate(struct clk *c);

/*
 * Clock definition macros
 */
#define DEFINE_CLK_MEASURE(name) \
	struct clk name = { \
		.ops = &clk_ops_empty, \
		.dbg_name = #name, \
		CLK_INIT(name), \
	} \

#endif /* __ARCH_ARM_MACH_MSM_CLOCK_LOCAL_2_H */
233	include/soc/qcom/clock-pll.h	Normal file
@@ -0,0 +1,233 @@
/* Copyright (c) 2012-2015, 2017-2018, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#ifndef __ARCH_ARM_MACH_MSM_CLOCK_PLL_H
#define __ARCH_ARM_MACH_MSM_CLOCK_PLL_H

#include <linux/clk/msm-clk-provider.h>

/**
 * struct pll_freq_tbl - generic PLL frequency definition
 * @freq_hz: pll frequency in hz
 * @l_val: pll l value
 * @m_val: pll m value
 * @n_val: pll n value
 * @post_div_val: pll post divider value
 * @pre_div_val: pll pre-divider value
 * @vco_val: pll vco value
 */
struct pll_freq_tbl {
	const u32 freq_hz;
	const u32 l_val;
	const u32 m_val;
	const u32 n_val;
	const u32 post_div_val;
	const u32 pre_div_val;
	const u32 vco_val;
};

/**
 * struct pll_config_masks - PLL config masks struct
 * @apc_pdn_mask: ORed with pll config register to enable/disable APC PDN
 * @post_div_mask: mask for post divider bits location
 * @pre_div_mask: mask for pre-divider bits location
 * @vco_mask: mask for vco bits location
 * @mn_en_mask: ORed with pll config register to enable the mn counter
 * @main_output_mask: ORed with pll config register to enable the main output
 * @early_output_mask: ORed with pll config register to enable the early output
 * @lock_mask: mask that indicates that the PLL has locked
 */
struct pll_config_masks {
	u32 apc_pdn_mask;
	u32 post_div_mask;
	u32 pre_div_mask;
	u32 vco_mask;
	u32 mn_en_mask;
	u32 main_output_mask;
	u32 early_output_mask;
	u32 lock_mask;
};

struct pll_config_vals {
	u32 post_div_masked;
	u32 pre_div_masked;
	u32 config_ctl_val;
	u32 config_ctl_hi_val;
	u32 test_ctl_lo_val;
	u32 test_ctl_hi_val;
	u32 alpha_val;
	bool enable_mn;
};

struct pll_spm_ctrl {
	u32 offset;
	u32 event_bit;
	void __iomem *spm_base;
};

#define PLL_FREQ_END	(UINT_MAX-1)
#define PLL_F_END { .freq_hz = PLL_FREQ_END }

/**
 * struct pll_vote_clk - phase locked loop (HW voteable)
 * @soft_vote: soft voting variable for multiple PLL software instances
 * @soft_vote_mask: soft voting mask for multiple PLL software instances
 * @en_reg: enable register
 * @en_mask: ORed with @en_reg to enable the clock
 * @status_reg: status register
 * @status_mask: ANDed with @status_reg to determine if PLL is active
 * @c: clock
 */
struct pll_vote_clk {
	u32 *soft_vote;
	u32 soft_vote_mask;
	void __iomem *const en_reg;
	u32 en_mask;
	void __iomem *const status_reg;
	u32 status_mask;

	struct clk c;

	void *const __iomem *base;
};

extern const struct clk_ops clk_ops_pll_vote;
extern const struct clk_ops clk_ops_pll_acpu_vote;
extern const struct clk_ops clk_ops_pll_sleep_vote;

/* Soft voting values */
#define PLL_SOFT_VOTE_PRIMARY	BIT(0)
#define PLL_SOFT_VOTE_ACPU	BIT(1)
#define PLL_SOFT_VOTE_AUX	BIT(2)
||||
static inline struct pll_vote_clk *to_pll_vote_clk(struct clk *c)
|
||||
{
|
||||
return container_of(c, struct pll_vote_clk, c);
|
||||
}
|
||||
|
||||
/**
|
||||
* struct pll_clk - phase locked loop
|
||||
* @mode_reg: enable register
|
||||
* @l_reg: l value register
|
||||
* @m_reg: m value register
|
||||
* @n_reg: n value register
|
||||
* @config_reg: configuration register, contains mn divider enable, pre divider,
|
||||
* post divider and vco configuration. register name can be configure register
|
||||
* or user_ctl register depending on targets
|
||||
* @config_ctl_reg: "expert" configuration register
|
||||
* @config_ctl_hi_reg: upper 32 bits of the "expert" configuration register
|
||||
* @status_reg: status register, contains the lock detection bit
|
||||
* @init_test_ctl: initialize the test control register
|
||||
* @pgm_test_ctl_enable: program the test_ctl register in the enable sequence
|
||||
* @test_ctl_dbg: if false will configure the test control registers.
|
||||
* @masks: masks used for settings in config_reg
|
||||
* @vals: configuration values to be written to PLL registers
|
||||
* @freq_tbl: pll freq table
|
||||
* @no_prepared_reconfig: Fail round_rate if pll is prepared
|
||||
* @c: clk
|
||||
* @base: pointer to base address of ioremapped registers.
|
||||
*/
|
||||
struct pll_clk {
|
||||
void __iomem *const mode_reg;
|
||||
void __iomem *const l_reg;
|
||||
void __iomem *const m_reg;
|
||||
void __iomem *const n_reg;
|
||||
void __iomem *const alpha_reg;
|
||||
void __iomem *const config_reg;
|
||||
void __iomem *const config_ctl_reg;
|
||||
void __iomem *const config_ctl_hi_reg;
|
||||
void __iomem *const status_reg;
|
||||
void __iomem *const alt_status_reg;
|
||||
void __iomem *const test_ctl_lo_reg;
|
||||
void __iomem *const test_ctl_hi_reg;
|
||||
|
||||
bool init_test_ctl;
|
||||
bool pgm_test_ctl_enable;
|
||||
bool test_ctl_dbg;
|
||||
|
||||
struct pll_config_masks masks;
|
||||
struct pll_config_vals vals;
|
||||
struct pll_freq_tbl *freq_tbl;
|
||||
|
||||
unsigned long src_rate;
|
||||
unsigned long min_rate;
|
||||
unsigned long max_rate;
|
||||
|
||||
bool inited;
|
||||
bool no_prepared_reconfig;
|
||||
|
||||
struct pll_spm_ctrl spm_ctrl;
|
||||
struct clk c;
|
||||
|
||||
void *const __iomem *base;
|
||||
};
|
||||
|
||||
extern const struct clk_ops clk_ops_local_pll;
extern const struct clk_ops clk_ops_sr2_pll;
extern const struct clk_ops clk_ops_acpu_pll;
extern const struct clk_ops clk_ops_variable_rate_pll;
extern const struct clk_ops clk_ops_variable_rate_pll_hwfsm;

void __variable_rate_pll_init(struct clk *c);

static inline struct pll_clk *to_pll_clk(struct clk *c)
{
	return container_of(c, struct pll_clk, c);
}

int sr_pll_clk_enable(struct clk *c);
int sr_hpm_lp_pll_clk_enable(struct clk *c);

struct pll_alt_config {
	u32 val;
	u32 mask;
};

struct pll_config {
	u32 l;
	u32 m;
	u32 n;
	u32 vco_val;
	u32 vco_mask;
	u32 pre_div_val;
	u32 pre_div_mask;
	u32 post_div_val;
	u32 post_div_mask;
	u32 mn_ena_val;
	u32 mn_ena_mask;
	u32 main_output_val;
	u32 main_output_mask;
	u32 aux_output_val;
	u32 aux_output_mask;
	u32 cfg_ctl_val;
	/* SR2 PLL specific fields */
	u32 add_factor_val;
	u32 add_factor_mask;
	struct pll_alt_config alt_cfg;
};

struct pll_config_regs {
	void __iomem *l_reg;
	void __iomem *m_reg;
	void __iomem *n_reg;
	void __iomem *config_reg;
	void __iomem *config_alt_reg;
	void __iomem *config_ctl_reg;
	void __iomem *mode_reg;

	void *const __iomem *base;
};

void configure_sr_pll(struct pll_config *config, struct pll_config_regs *regs,
		u32 ena_fsm_mode);
void configure_sr_hpm_lp_pll(struct pll_config *config,
		struct pll_config_regs *regs, u32 ena_fsm_mode);

#endif
179	include/soc/qcom/clock-rpm.h	Normal file
@@ -0,0 +1,179 @@
/* Copyright (c) 2010-2015, 2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __ARCH_ARM_MACH_MSM_CLOCK_RPM_H
#define __ARCH_ARM_MACH_MSM_CLOCK_RPM_H

#include <linux/clk/msm-clk-provider.h>
#include <soc/qcom/rpm-smd.h>

#define RPM_SMD_KEY_RATE	0x007A484B
#define RPM_SMD_KEY_ENABLE	0x62616E45
#define RPM_SMD_KEY_STATE	0x54415453

#define RPM_CLK_BUFFER_A_REQ			0x616B6C63
#define RPM_KEY_SOFTWARE_ENABLE			0x6E657773
#define RPM_KEY_PIN_CTRL_CLK_BUFFER_ENABLE_KEY	0x62636370

struct clk_ops;
struct clk_rpmrs_data;
extern const struct clk_ops clk_ops_rpm;
extern const struct clk_ops clk_ops_rpm_branch;

struct rpm_clk {
	int rpm_res_type;
	int rpm_key;
	int rpm_clk_id;
	const int rpm_status_id;
	bool active_only;
	bool enabled;
	bool branch; /* true: RPM only accepts 1 for ON and 0 for OFF */
	struct clk_rpmrs_data *rpmrs_data;
	struct rpm_clk *peer;
	struct clk c;
	uint32_t *last_active_set_vote;
	uint32_t *last_sleep_set_vote;
};

static inline struct rpm_clk *to_rpm_clk(struct clk *clk)
{
	return container_of(clk, struct rpm_clk, c);
}

/*
 * RPM scaling enable function, used on targets that have an RPM resource
 * for enabling RPM clock scaling.
 */
int enable_rpm_scaling(void);

int vote_bimc(struct rpm_clk *r, uint32_t value);

extern struct clk_rpmrs_data clk_rpmrs_data_smd;

/*
 * A note on name##last_{active,sleep}_set_vote below:
 * We track the last active and sleep set votes across both
 * active-only and active+sleep set clocks. We use the same
 * tracking variables for both clocks in order to keep both
 * updated about the last vote irrespective of which clock
 * actually made the request. This is the only way to allow
 * optimizations that prevent duplicate requests from being sent
 * to the RPM. Separate tracking does not work since it is not
 * possible to know if the peer's last request was actually sent
 * to the RPM.
 */

#define __DEFINE_CLK_RPM(name, active, type, r_id, stat_id, dep, key, \
				rpmrsdata) \
	static struct rpm_clk active; \
	static uint32_t name##last_active_set_vote; \
	static uint32_t name##last_sleep_set_vote; \
	static struct rpm_clk name = { \
		.rpm_res_type = (type), \
		.rpm_clk_id = (r_id), \
		.rpm_status_id = (stat_id), \
		.rpm_key = (key), \
		.peer = &active, \
		.rpmrs_data = (rpmrsdata), \
		.last_active_set_vote = &name##last_active_set_vote, \
		.last_sleep_set_vote = &name##last_sleep_set_vote, \
		.c = { \
			.ops = &clk_ops_rpm, \
			.dbg_name = #name, \
			CLK_INIT(name.c), \
			.depends = dep, \
		}, \
	}; \
	static struct rpm_clk active = { \
		.rpm_res_type = (type), \
		.rpm_clk_id = (r_id), \
		.rpm_status_id = (stat_id), \
		.rpm_key = (key), \
		.peer = &name, \
		.active_only = true, \
		.rpmrs_data = (rpmrsdata), \
		.last_active_set_vote = &name##last_active_set_vote, \
		.last_sleep_set_vote = &name##last_sleep_set_vote, \
		.c = { \
			.ops = &clk_ops_rpm, \
			.dbg_name = #active, \
			CLK_INIT(active.c), \
			.depends = dep, \
		}, \
	}

#define __DEFINE_CLK_RPM_BRANCH(name, active, type, r_id, stat_id, r, \
				key, rpmrsdata) \
	static struct rpm_clk active; \
	static uint32_t name##last_active_set_vote; \
	static uint32_t name##last_sleep_set_vote; \
	static struct rpm_clk name = { \
		.rpm_res_type = (type), \
		.rpm_clk_id = (r_id), \
		.rpm_status_id = (stat_id), \
		.rpm_key = (key), \
		.peer = &active, \
		.branch = true, \
		.rpmrs_data = (rpmrsdata), \
		.last_active_set_vote = &name##last_active_set_vote, \
		.last_sleep_set_vote = &name##last_sleep_set_vote, \
		.c = { \
			.ops = &clk_ops_rpm_branch, \
			.dbg_name = #name, \
			.rate = (r), \
			CLK_INIT(name.c), \
		}, \
	}; \
	static struct rpm_clk active = { \
		.rpm_res_type = (type), \
		.rpm_clk_id = (r_id), \
		.rpm_status_id = (stat_id), \
		.rpm_key = (key), \
		.peer = &name, \
		.active_only = true, \
		.branch = true, \
		.rpmrs_data = (rpmrsdata), \
		.last_active_set_vote = &name##last_active_set_vote, \
		.last_sleep_set_vote = &name##last_sleep_set_vote, \
		.c = { \
			.ops = &clk_ops_rpm_branch, \
			.dbg_name = #active, \
			.rate = (r), \
			CLK_INIT(active.c), \
		}, \
	}

#define DEFINE_CLK_RPM_SMD(name, active, type, r_id, dep) \
	__DEFINE_CLK_RPM(name, active, type, r_id, 0, dep, \
				RPM_SMD_KEY_RATE, &clk_rpmrs_data_smd)

#define DEFINE_CLK_RPM_SMD_BRANCH(name, active, type, r_id, r) \
	__DEFINE_CLK_RPM_BRANCH(name, active, type, r_id, 0, r, \
				RPM_SMD_KEY_ENABLE, &clk_rpmrs_data_smd)

#define DEFINE_CLK_RPM_SMD_QDSS(name, active, type, r_id) \
	__DEFINE_CLK_RPM(name, active, type, r_id, \
				0, 0, RPM_SMD_KEY_STATE, &clk_rpmrs_data_smd)
/*
 * The RPM XO buffer clock management code aggregates votes for pin-control
 * mode and software mode separately. Software-enable has higher priority
 * than pin-control, and if the software-mode aggregation results in a
 * 'disable', the buffer will be left in pin-control mode if a pin-control
 * vote is in place.
 */
#define DEFINE_CLK_RPM_SMD_XO_BUFFER(name, active, r_id) \
	__DEFINE_CLK_RPM_BRANCH(name, active, RPM_CLK_BUFFER_A_REQ, r_id, 0, \
			1000, RPM_KEY_SOFTWARE_ENABLE, &clk_rpmrs_data_smd)

#define DEFINE_CLK_RPM_SMD_XO_BUFFER_PINCTRL(name, active, r_id) \
	__DEFINE_CLK_RPM_BRANCH(name, active, RPM_CLK_BUFFER_A_REQ, r_id, 0, \
		1000, RPM_KEY_PIN_CTRL_CLK_BUFFER_ENABLE_KEY, &clk_rpmrs_data_smd)
#endif
50	include/soc/qcom/clock-voter.h	Normal file
@@ -0,0 +1,50 @@
/* Copyright (c) 2010-2013, 2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __ARCH_ARM_MACH_MSM_CLOCK_VOTER_H
#define __ARCH_ARM_MACH_MSM_CLOCK_VOTER_H

#include <linux/clk/msm-clk-provider.h>

struct clk_ops;
extern const struct clk_ops clk_ops_voter;

struct clk_voter {
	int is_branch;
	bool enabled;
	struct clk c;
};

static inline struct clk_voter *to_clk_voter(struct clk *clk)
{
	return container_of(clk, struct clk_voter, c);
}

#define __DEFINE_CLK_VOTER(clk_name, _parent, _default_rate, _is_branch) \
	struct clk_voter clk_name = { \
		.is_branch = (_is_branch), \
		.c = { \
			.parent = _parent, \
			.dbg_name = #clk_name, \
			.ops = &clk_ops_voter, \
			.rate = _default_rate, \
			CLK_INIT(clk_name.c), \
		}, \
	}

#define DEFINE_CLK_VOTER(clk_name, _parent, _default_rate) \
	__DEFINE_CLK_VOTER(clk_name, _parent, _default_rate, 0)

#define DEFINE_CLK_BRANCH_VOTER(clk_name, _parent) \
	__DEFINE_CLK_VOTER(clk_name, _parent, 1000, 1)

#endif
142	include/soc/qcom/msm-clock-controller.h	Normal file
@@ -0,0 +1,142 @@
/* Copyright (c) 2014, 2017, The Linux Foundation. All rights reserved.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 and
 * only version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#ifndef __ARCH_ARM_MSM_CLOCK_CONTROLLER_H
#define __ARCH_ARM_MSM_CLOCK_CONTROLLER_H

#include <linux/list.h>
#include <linux/clkdev.h>
#include <linux/of.h>
#include <linux/platform_device.h>

#define dt_err(np, fmt, ...) \
	pr_err("%s: " fmt, np->name, ##__VA_ARGS__)
#define dt_prop_err(np, str, fmt, ...) \
	dt_err(np, "%s: " fmt, str, ##__VA_ARGS__)

/**
 * struct msmclk_parser
 * @compatible
 *	matches the compatible property from devicetree
 * @parsedt
 *	constructs and returns an instance of the appropriate object based
 *	on the data from devicetree
 */
struct msmclk_parser {
	struct list_head list;
	char *compatible;
	void * (*parsedt)(struct device *dev, struct device_node *of);
};

#define MSMCLK_PARSER(fn, str, id) \
	static struct msmclk_parser _msmclk_##fn##id = { \
		.list = LIST_HEAD_INIT(_msmclk_##fn##id.list), \
		.compatible = str, \
		.parsedt = fn, \
	}; \
	static int __init _msmclk_init_##fn##id(void) \
	{ \
		msmclk_parser_register(&_msmclk_##fn##id); \
		return 0; \
	} \
	early_initcall(_msmclk_init_##fn##id)

/*
 * struct msmclk_data
 * @base
 *	ioremapped region for sub_devices
 * @list
 *	tracks all registered driver instances
 * @htable
 *	tracks all registered child clocks
 * @clk_tbl
 *	array of clk_lookup to be registered with the clock framework
 */
#define HASHTABLE_SIZE 200
struct msmclk_data {
	void __iomem *base;
	struct device *dev;
	struct list_head list;
	struct hlist_head htable[HASHTABLE_SIZE];
	struct clk_lookup *clk_tbl;
	int clk_tbl_size;
	int max_clk_tbl_size;
};

#if defined(CONFIG_MSM_CLK_CONTROLLER_V2)

/* Utility functions */
int of_property_count_phandles(struct device_node *np, char *propname);
int of_property_read_phandle_index(struct device_node *np, char *propname,
				int index, phandle *p);
void *msmclk_generic_clk_init(struct device *dev, struct device_node *np,
				struct clk *c);

/*
 * msmclk_parser_register
 * Registers a parser which will be matched with a node from dt
 * according to the compatible string.
 */
void msmclk_parser_register(struct msmclk_parser *p);

/*
 * msmclk_parse_phandle
 * On a hashtable miss, the corresponding entry will be retrieved from
 * devicetree and added to the hashtable.
 */
void *msmclk_parse_phandle(struct device *dev, phandle key);
/*
 * msmclk_lookup_phandle
 * Straightforward hashtable lookup
 */
void *msmclk_lookup_phandle(struct device *dev, phandle key);

int __init msmclk_init(void);
#else

static inline int of_property_count_phandles(struct device_node *np,
				char *propname)
{
	return 0;
}

static inline int of_property_read_phandle_index(struct device_node *np,
				char *propname, int index, phandle *p)
{
	return 0;
}

static inline void *msmclk_generic_clk_init(struct device *dev,
				struct device_node *np, struct clk *c)
{
	return ERR_PTR(-EINVAL);
}

static inline void msmclk_parser_register(struct msmclk_parser *p) {}

static inline void *msmclk_parse_phandle(struct device *dev, phandle key)
{
	return ERR_PTR(-EINVAL);
}

static inline void *msmclk_lookup_phandle(struct device *dev, phandle key)
{
	return ERR_PTR(-EINVAL);
}

static inline int __init msmclk_init(void)
{
	return 0;
}

#endif /* CONFIG_MSM_CLK_CONTROLLER_V2 */
#endif /* __ARCH_ARM_MSM_CLOCK_CONTROLLER_H */
@@ -325,6 +325,7 @@ DEFINE_EVENT(wakeup_source, wakeup_source_deactivate,
 * The clock events are used for clock enable/disable and for
 * clock rate change
 */
#if defined(CONFIG_COMMON_CLK_MSM)
DECLARE_EVENT_CLASS(clock,

	TP_PROTO(const char *name, unsigned int state, unsigned int cpu_id),
@@ -368,6 +369,13 @@ DEFINE_EVENT(clock, clock_set_rate,
	TP_ARGS(name, state, cpu_id)
);

DEFINE_EVENT(clock, clock_set_rate_complete,

	TP_PROTO(const char *name, unsigned int state, unsigned int cpu_id),

	TP_ARGS(name, state, cpu_id)
);

TRACE_EVENT(clock_set_parent,

	TP_PROTO(const char *name, const char *parent_name),
@@ -387,6 +395,32 @@ TRACE_EVENT(clock_set_parent,
	TP_printk("%s parent=%s", __get_str(name), __get_str(parent_name))
);

TRACE_EVENT(clock_state,

	TP_PROTO(const char *name, unsigned long prepare_count,
		unsigned long count, unsigned long rate),

	TP_ARGS(name, prepare_count, count, rate),

	TP_STRUCT__entry(
		__string(name, name)
		__field(unsigned long, prepare_count)
		__field(unsigned long, count)
		__field(unsigned long, rate)
	),

	TP_fast_assign(
		__assign_str(name, name);
		__entry->prepare_count = prepare_count;
		__entry->count = count;
		__entry->rate = rate;
	),
	TP_printk("%s\t[%lu:%lu]\t%lu", __get_str(name), __entry->prepare_count,
		__entry->count, __entry->rate)

);
#endif /* CONFIG_COMMON_CLK_MSM */

/*
 * The power domain events are used for power domain transitions
 */