Power Management Device Latencies Measurement


==PM Devices constraints measurements==

==Introduction==

To correctly implement device latency constraint support, accurate measurements of the overhead of the system low power modes are needed.

This wiki page details the measurement setup and the results. The latency data is to be fed into the latency constraint patches.
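
For illustration, this is how a per-device wake-up latency constraint can be expressed with the device PM QoS framework. A minimal sketch, assuming the current mainline API (dev_pm_qos_add_request with a DEV_PM_QOS_RESUME_LATENCY request); the patch series discussed on this page used an earlier form of this interface:

 #include <linux/pm_qos.h>
 #include <linux/device.h>
 
 /* Illustrative only: ask the PM frameworks to keep this device's
  * resume latency below 500 us, so that low power states whose
  * wake-up overhead exceeds the bound are avoided.
  */
 static struct dev_pm_qos_request latency_req;
 
 static int example_add_latency_constraint(struct device *dev)
 {
 	/* DEV_PM_QOS_RESUME_LATENCY is the request type in current
 	 * mainline kernels; the original patch set used an earlier
 	 * variant of this call. */
 	return dev_pm_qos_add_request(dev, &latency_req,
 				      DEV_PM_QOS_RESUME_LATENCY, 500);
 }
 
 static void example_remove_latency_constraint(void)
 {
 	dev_pm_qos_remove_request(&latency_req);
 }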

==Kernel patches & build==

Some kernel changes are required for the kernel instrumentation. The patches and config are attached to this page.

Changes: enable IDLE, DSS for Beagle, Initramfs Busybox root FS

==HW traces details==

The trace points are connected on Beagleboard rev B7.

!Warning! The HW power supplies and external clocks are not cut off in this configuration (no support for System OFF in l-o), so the measured HW latencies are lower than expected. The HW measurements need to be redone as soon as l-o supports System OFF; meanwhile the measurements from TI are used for the real HW latency.

Here are some scope screenshots showing the time delta between the wake-up event (USER button press, trace A) and the end of omap_sram_idle (USR1 LED).

For RET mode, showing a delta of 408us:

[[File:Scope capture ret.jpg|center|thumb]]

For OFF mode, showing a delta of 2700us:

[[File:Scope capture off.jpg|center|thumb]]

==GPT tracer==

Since GPT12 is used as the wake-up source from idle mode, it can also be used to track timings during the wake-up sequence. A patch is needed to let the timer keep counting after it has overflowed and woken up the system.

The GPT runs on a 32KHz clock, so the resolution is limited to 30.518us. Given the latencies to measure for OFF mode, this resolution is acceptable.

Four GPT measurements are performed during the wake-up sequence.
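
As an illustration of the conversion used when post-processing the GPT12 samples, here is a minimal sketch; gpt12_read_counter() is a hypothetical helper standing in for the actual GPT12 counter read, and only the 32KHz clock rate is taken from the text above:

 #include <linux/types.h>
 
 /* GPT12 runs from the 32KHz oscillator: one tick = 1/32768 s,
  * i.e. ~30.518 us, which bounds the measurement resolution. */
 #define GPT_CLK_HZ	32768
 
 /* Hypothetical helper: read the free-running GPT12 counter. */
 extern u32 gpt12_read_counter(void);
 
 static u32 gpt_ticks_to_us(u32 ticks)
 {
 	/* 64-bit intermediate to avoid overflow; the divisor is a
 	 * power of two, so this compiles down to a shift. */
 	return (u32)(((u64)ticks * 1000000) / GPT_CLK_HZ);
 }
 
 /* Usage sketch: sample the counter at two points of the wake-up
  * path (e.g. right after WFI and at the end of omap_sram_idle)
  * and convert the delta. */
 static u32 gpt_delta_us(u32 start, u32 end)
 {
 	return gpt_ticks_to_us(end - start);
 }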

==SW trace usage==

Enable the power events and dump the trace:

# echo 1 > /debug/tracing/events/power/enable
# cat /debug/tracing/trace_pipe &

Enable the system idle in RET mode:

# echo 5 > /sys/devices/platform/omap/omap-hsuart.0/sleep_timeout 
# echo 5 > /sys/devices/platform/omap/omap-hsuart.1/sleep_timeout 
# echo 5 > /sys/devices/platform/omap/omap-hsuart.2/sleep_timeout 

# echo 0 > /debug/pm_debug/enable_off_mode
# echo 1 > /debug/pm_debug/sleep_while_idle

Trace output:

[   62.311462] *** GPT12 wake-up (HW wake-up, ASM restore, delta trace1-7): 183, 0, 244 us       => Dump of GPT timing deltas
          <idle>-0     [000]    62.241608: power_start: type=1 state=1 cpu_id=0                  => Idle start
          <idle>-0     [000]    62.241608: power_start: type=4 state=1 cpu_id=0                  => First suspend SW trace in omap_sram_idle
          <idle>-0     [000]    62.241638: power_start: type=4 state=2 cpu_id=0                  => ...
          <idle>-0     [000]    62.241669: power_start: type=4 state=3 cpu_id=0
          <idle>-0     [000]    62.241699: power_domain_target: name=neon_pwrdm state=1 cpu_id=0
          <idle>-0     [000]    62.241699: power_start: type=4 state=4 cpu_id=0
          <idle>-0     [000]    62.241699: clock_disable: name=uart3_fck state=0 cpu_id=0
          <idle>-0     [000]    62.241730: power_start: type=4 state=5 cpu_id=0
          <idle>-0     [000]    62.241730: clock_disable: name=uart1_fck state=0 cpu_id=0
          <idle>-0     [000]    62.241730: clock_disable: name=uart2_fck state=0 cpu_id=0
          <idle>-0     [000]    62.241760: power_start: type=4 state=6 cpu_id=0
          <idle>-0     [000]    62.241760: power_start: type=4 state=7 cpu_id=0
          <idle>-0     [000]    62.241760: power_start: type=4 state=8 cpu_id=0                  => Last suspend SW trace in omap_sram_idle
          <idle>-0     [000]    62.311188: power_start: type=5 state=1 cpu_id=0                  => First resume SW trace in omap_sram_idle
          <idle>-0     [000]    62.311188: power_start: type=5 state=2 cpu_id=0                  => ...
          <idle>-0     [000]    62.311188: power_start: type=5 state=3 cpu_id=0
          <idle>-0     [000]    62.311188: power_start: type=5 state=4 cpu_id=0
          <idle>-0     [000]    62.311218: clock_enable: name=uart1_fck state=1 cpu_id=0
          <idle>-0     [000]    62.311310: clock_enable: name=uart2_fck state=1 cpu_id=0
          <idle>-0     [000]    62.311310: power_start: type=5 state=5 cpu_id=0
          <idle>-0     [000]    62.311340: clock_enable: name=uart3_fck state=1 cpu_id=0
          <idle>-0     [000]    62.311340: power_start: type=5 state=6 cpu_id=0
          <idle>-0     [000]    62.311432: power_start: type=5 state=7 cpu_id=0                  => Last resume SW trace in omap_sram_idle
          <idle>-0     [000]    62.311462: power_end: cpu_id=0                                   => Idle end

Enable the system idle in OFF mode:

# echo 5 > /sys/devices/platform/omap/omap-hsuart.0/sleep_timeout 
# echo 5 > /sys/devices/platform/omap/omap-hsuart.1/sleep_timeout 
# echo 5 > /sys/devices/platform/omap/omap-hsuart.2/sleep_timeout 

# echo 1 > /debug/pm_debug/enable_off_mode
# echo 1 > /debug/pm_debug/sleep_while_idle

Trace output:

/ # echo 1 > /debug/pm_debug/enable_off_mode
/ #           
              sh-503   [000]    70.862366: power_domain_target: name=iva2_pwrdm state=0 cpu_id=0
              sh-503   [000]    70.862396: power_domain_target: name=mpu_pwrdm state=0 cpu_id=0
              sh-503   [000]    70.862396: power_domain_target: name=neon_pwrdm state=0 cpu_id=0
              sh-503   [000]    70.862396: power_domain_target: name=core_pwrdm state=0 cpu_id=0
              sh-503   [000]    70.862427: power_domain_target: name=cam_pwrdm state=0 cpu_id=0
              sh-503   [000]    70.862457: power_domain_target: name=dss_pwrdm state=0 cpu_id=0
              sh-503   [000]    70.862488: power_domain_target: name=per_pwrdm state=0 cpu_id=0
              sh-503   [000]    70.862488: power_domain_target: name=usbhost_pwrdm state=0 cpu_id=0
/ # 
[  557.240020] *** GPT12 wake-up (HW wake-up, ASM restore, delta trace1-7): 1495, 915, 488 us    => Dump of GPT timing deltas
          <idle>-0     [000]   557.156769: power_start: type=1 state=1 cpu_id=0                  => Idle start
          <idle>-0     [000]   557.156769: power_start: type=4 state=1 cpu_id=0                  => First suspend SW trace in omap_sram_idle
          <idle>-0     [000]   557.156769: power_start: type=4 state=2 cpu_id=0                  => ...
          <idle>-0     [000]   557.156830: power_start: type=4 state=3 cpu_id=0
          <idle>-0     [000]   557.156830: power_domain_target: name=neon_pwrdm state=0 cpu_id=0
          <idle>-0     [000]   557.156830: power_start: type=4 state=4 cpu_id=0
          <idle>-0     [000]   557.156860: clock_disable: name=uart3_fck state=0 cpu_id=0
          <idle>-0     [000]   557.156891: power_start: type=4 state=5 cpu_id=0
          <idle>-0     [000]   557.156891: clock_disable: name=uart1_fck state=0 cpu_id=0
          <idle>-0     [000]   557.156921: clock_disable: name=uart2_fck state=0 cpu_id=0
          <idle>-0     [000]   557.157013: power_start: type=4 state=6 cpu_id=0
          <idle>-0     [000]   557.157013: power_start: type=4 state=7 cpu_id=0
          <idle>-0     [000]   557.157898: power_start: type=4 state=8 cpu_id=0                  => Last suspend SW trace in omap_sram_idle
          <idle>-0     [000]   557.236084: power_start: type=5 state=1 cpu_id=0                  => First resume SW trace in omap_sram_idle
          <idle>-0     [000]   557.236145: power_start: type=5 state=2 cpu_id=0                  => ...
          <idle>-0     [000]   557.236206: power_start: type=5 state=3 cpu_id=0
          <idle>-0     [000]   557.236267: power_start: type=5 state=4 cpu_id=0
          <idle>-0     [000]   557.236389: clock_enable: name=uart1_fck state=1 cpu_id=0
          <idle>-0     [000]   557.236450: clock_enable: name=uart2_fck state=1 cpu_id=0
          <idle>-0     [000]   557.236450: power_start: type=5 state=5 cpu_id=0
          <idle>-0     [000]   557.236481: clock_enable: name=uart3_fck state=1 cpu_id=0
          <idle>-0     [000]   557.236511: power_start: type=5 state=6 cpu_id=0
          <idle>-0     [000]   557.236572: power_start: type=5 state=7 cpu_id=0                  => Last resume SW trace in omap_sram_idle
          <idle>-0     [000]   557.236602: power_end: cpu_id=0                                   => Idle end

==Results interpretation==

The low power transition sequence is pictured as nested calls to functions:

[[File:Low power transition sequence.png|center|thumb]]

The measured results (from the HW and SW traces) are mapped to the pictured states according to the following table:

{|border="1"
!Pictured state
!Trace point
!Performed SW action
|-
|Idle enter
|start suspend
|System ready to enter idle
|-
|omap_sram_idle 1
|suspend trace point 1
|Enter omap_sram_idle
|-
|omap_sram_idle 2
|suspend trace point 2
|Calculation of next power domains modes
|-
|omap_sram_idle 3
|suspend trace point 3
|Power domains pre-transition: program power domains current state, clear status
|-
|omap_sram_idle 4
|suspend trace point 4
|Context save for NEON. IO pad and chain new state programmed
|-
|omap_sram_idle 5
|suspend trace point 5
|Context save for PER, GPIO. Prepare UARTs 2&3
|-
|omap_sram_idle 6
|suspend trace point 6
|Context save for CORE and PRCM. Prepare UARTs 0&1
|-
|omap_sram_idle 7
|suspend trace point 7
|Context save for INTC. Program SDRC
|-
|WFI enter
|suspend trace point 8. GPIO HW trace
|MPU context save in ASM (caches, registers, disable cache & prediction)
|-
|System OFF active
| -
|sys_off_mode, external clocks and power supplies: to be measured with System OFF support
|-
|Wake-up event: IO or GPT12
|HW trace A (if IO wake-up). GPT12=0 (if GPT wake-up)
| -
|-
|System OFF inactive
| -
|sys_off_mode, external clocks and power supplies: to be measured with System OFF support
|-
|WFI exit
|GPT12 sampling right after WFI
| -
|-
|omap_sram_idle 1
|GPT12 sampling at return from ASM code. Wake-up trace point 1
|SDRC errata for ES3.1. MPU context restore. MMU restore and enable
|-
|omap_sram_idle 2
|wake-up trace point 2
|cpu_init
|-
|omap_sram_idle 3
|wake-up trace point 3
|SDRC settings restore
|-
|omap_sram_idle 4
|wake-up trace point 4
|Restore MMU tables. Enable caches and prediction
|-
|omap_sram_idle 5
|wake-up trace point 5
|Context restore for CORE, PRCM, SRAM, SMS. Resume UARTs 0&1
|-
|omap_sram_idle 6
|wake-up trace point 6
|Context restore for PER, INTC, GPIO. IO pad & chain. Resume UARTs 2&3
|-
|omap_sram_idle 7
|wake-up trace point 7. GPT sampling. HW trace B
|Power domains post-transition: program power domains current state, clear status. Restore SDRC settings
|-
|Idle exit
|exit suspend
|System out of idle
|}

==cpuidle results==

===PSI measurements results===

Some timing measurements have been made by the TI PSI team. The following tables give the results for the sleep and wake-up latencies of the C-states:

[[File:C states sleep latency.png|center|thumb]]
[[File:C states wake up latency.png|center|thumb]]

Note: in the Linux code there is no C7/C8/C9 split as in the table; the Linux C7 state is MPU OFF + CORE OFF, which is identical to C9 in the table.

A model of the energy spent in the C-states has been built from the measured numbers. Here is the graph of energy vs. time:

[[File:C states os idle energy.png|center|thumb]]

Taking the minimum energy from the graph makes it possible to identify the four energy-wise interesting C-states (C1, C3, C5, C9) and the threshold time above which each of those C-states is efficient:

[[File:C states data.png|center|thumb]]
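
The threshold times follow from a break-even argument. A minimal sketch of the reasoning, assuming a simple linear energy model (fixed transition energy plus constant power while idle; the model actually used for the graphs above may be more detailed):

 /* Illustrative linear energy model for one C-state: entering and
  * leaving the state costs a fixed transition energy E_trans (uJ)
  * and staying in it burns P_state (mW) over the idle duration.
  * A deeper state with lower P but higher E_trans only pays off
  * past the break-even duration where the two energy curves cross. */
 struct cstate_model {
 	unsigned int e_trans_uj;	/* transition energy, in uJ */
 	unsigned int p_state_mw;	/* power while idle, in mW */
 };
 
 /* Break-even idle duration (us) between a shallow and a deeper
  * C-state; below this threshold the shallow state wins. */
 static unsigned int break_even_us(const struct cstate_model *shallow,
 				  const struct cstate_model *deep)
 {
 	unsigned int de = deep->e_trans_uj - shallow->e_trans_uj;
 	unsigned int dp = shallow->p_state_mw - deep->p_state_mw;
 
 	/* E_s + P_s*t = E_d + P_d*t  =>  t = (E_d - E_s) / (P_s - P_d);
 	 * uJ / mW gives ms, scaled by 1000 to get us. */
 	return dp ? (de * 1000) / dp : 0;
 }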


===HW and SW measurements results===

Here are the results for full RET and full OFF modes:

{|border="1"
!Sequence
!Time (us) - RET = C5
!Time (us) - OFF = C9
|-
|From idle start till omap_sram_idle entry
|0
|0
|-
|From omap_sram_idle entry till WFI
|152
|1129
|-
|colspan="3"|... HW sleep ...
|-
|From wake-up (WKUP) event till exit from WFI (HW wake-up - GPT12)
|183
|1495
|-
|From WFI till return from omap34xx_save_cpu_context_wfi (MPU context restore in ASM)
|0
|915
|-
|From return from omap34xx_save_cpu_context_wfi till end of omap_sram_idle (System restore)
|244
|488
|-
|From end of omap_sram_idle till return from idle
|30
|30
|}

==Aggregated timings results==

From the various sources of data the following figures are derived for all C-states (timings in us). The results are used in the cpuidle table (in arch/arm/mach-omap2/cpuidle34xx.c).

{|border="1"
!C-state
!Sleep latency (us)
!Wake-up latency (us)
!Threshold (us)
|-
|C1: MPU WFI/ON - CORE ON
|73.6
|78
|151.6
|-
|C2: MPU WFI - CORE INA
|165
|88.16
|345 (1)
|-
|C3: MPU CSWR - CORE INA
|163
|182
|345
|-
|C4: MPU OFF - CORE INA
|2852
|605
|150000 (2)
|-
|C5: MPU CSWR - CORE CSWR
|800
|366 (3)
|2120
|-
|C6: MPU OFF - CORE CSWR
|4080
|801
|215000 (1)
|-
|C7: MPU OFF - CORE OFF
|4300
|12933 (4)
|215000 (5)
|}

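For reference, such figures end up as the exit_latency and target_residency fields of the generic struct cpuidle_state. A minimal sketch filled with the C1 row above; the actual table in arch/arm/mach-omap2/cpuidle34xx.c carries more OMAP-specific data and the mandatory .enter callback:

 #include <linux/cpuidle.h>
 
 /* Sketch: one entry of a cpuidle state table, filled with the
  * aggregated C1 figures above (values in us, rounded). */
 static struct cpuidle_state omap3_c1_example = {
 	.exit_latency		= 74 + 78,	/* sleep + wake-up latency */
 	.target_residency	= 152,		/* break-even threshold */
 	.name			= "C1",
 	.desc			= "MPU ON + CORE ON",
 	/* .enter callback omitted in this sketch */
 };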

==Results for individual power domains==

Since cpuidle only manages the MPU (and dependent power domains), the wake-up latency values for the other power domains must be measured separately. By adjusting the target states of the power domains (in /debug/pm_debug/xxxx_pwrdm/suspend) the following combinations have been measured; all values are in us.

===HW and SW measurements results===

The HW and SW tracers are used to measure the wake-up latencies of the power domains. The results are in the table:

[[File:PD measurements.png|center|thumb]]


The significant power domain latencies are derived from the table as follows:

{|border="1"
!Power domain
!RET latency (us)
!OFF latency (us)
!Table location
|-
|MPU
|121
|1830
|(5), (6)
|-
|NEON
|0
|0
|Included in MPU transitions?
|-
|CORE
|153
|3082
|(3), (4)
|-
|PER
|0
|671
|(1), (2)
|}

Those figures are used in the code as the power domains wake-up latencies for RET and OFF, cf. arch/arm/mach-omap2/powerdomains3xxx_data.c.
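
As an illustration of how the per-domain figures can be attached to the power domain data, here is a sketch; the wakeup_lat field and the index names are hypothetical (the functional power states series adds something similar, but the exact structure may differ):

 /* Hypothetical encoding of the measured wake-up latencies (us)
  * in the OMAP3 power domain data.  Field and index names are
  * illustrative, not the actual mainline struct powerdomain. */
 #define PWRDM_RET_LAT	0
 #define PWRDM_OFF_LAT	1
 
 struct pwrdm_wakeup_lat_example {
 	const char *name;
 	unsigned int wakeup_lat[2];	/* [PWRDM_RET_LAT, PWRDM_OFF_LAT] */
 };
 
 static const struct pwrdm_wakeup_lat_example omap3_pwrdm_lat[] = {
 	{ .name = "mpu_pwrdm",  .wakeup_lat = { 121, 1830 } },
 	{ .name = "neon_pwrdm", .wakeup_lat = {   0,    0 } },
 	{ .name = "core_pwrdm", .wakeup_lat = { 153, 3082 } },
 	{ .name = "per_pwrdm",  .wakeup_lat = {   0,  671 } },
 };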

==ToDo==

* Measure and add figures for OMAP4
* Correct some numbers when sys_clkreq and sys_offmode are supported

==C1 performance problem: analysis==

A serious performance degradation has been noticed during transfers from the NAND device using DMA, cf. http://marc.info/?l=linux-omap&m=133467316214021&w=2 for the detailed discussion and patches [1]->[6]. The C1 C-state has a very high latency, which degrades the use case performance.

===Setup===

* Beagleboard (OMAP3530) at 500MHz,
* l-o master kernel + functional power states + per-device PM QoS. It has been checked that the changes from l-o master do not have an impact on the performance.
* The data transfer is performed using dd from a file in JFFS2 to /dev/null: 'dd if=/tmp/mnt/a of=/dev/null bs=1M count=32'.

===Results===

Here are the results on Beagleboard:

* Without using DMA: 4.7MB/s,
* Using DMA:

{|border="1"
!Patches applied
!Description
!Measured BW
|-
|[0]
|Initial code
|2.1MB/s
|-
|[1]
|C1 only
|2.1MB/s
|-
|[1]+[2]
|No pre_ post_
|2.6MB/s
|-
|[1]+[5]
|No pwrdm_for_each_clkdm
|2.3MB/s
|-
|[1]+[5]+[2]
|
|2.8MB/s
|-
|[1]+[7]
|Regs cache (invalidate current states after WFI, invalidate prev states in clear_all_prev_pwrst)
|2.2MB/s
|-
|[1]+[7]+[8]
|khilman's optimizations on pre_ post_ transitions + Regs cache
|2.6MB/s
|-
|[1]+[7]+[8]+[9]
|per=core in C1 + khilman's optimizations on pre_ post_ transitions + Regs cache
|2.8MB/s
|-
|[1]+[7]+[8]+[9]+[10]
|allow/deny_idle on pwrdm->clkdm[0] for mpu, core + per=core in C1 + khilman's optimizations on pre_ post_ transitions + Regs cache
|3.0MB/s
|-
|[1]+[8]+[9]+[10]
|allow/deny_idle on pwrdm->clkdm[0] for mpu, core + per>=core in C1 + khilman's optimizations on pre_ post_ transitions
|3.1MB/s
|-
|[1]+[5]+[6]
|No omap_sram_idle
|3.1MB/s
|-
|[1]+[5]+[6]+No IDLE
|No CPUIDLE, no omap_sram_idle, all pwrdms to ON
|3.1MB/s
|}

This shows that there is a serious performance issue with the C1 C-state, but also that patches [7]->[10] provide solutions.

Notes:

* Patches for [7] are at http://marc.info/?l=linux-omap&m=133587781712039&w=2
* Patches for [8] are at http://marc.info/?l=linux-omap&m=133527749024432&w=2
* Patch for [9] is at http://marc.info/?l=linux-omap&m=133656106811605&w=2
* Patch for [10] is at http://marc.info/?l=linux-omap&m=133656106911606&w=2

===Main contributors===

Here are the contributors inside __omap3_enter_idle (averaged over 30 samples):

[[File:Omap sram idle latency table.png|center|thumb|540px]]

The main contributors are:

* (140us) pwrdm_pre_transition and pwrdm_post_transition,
* (105us) omap2_gpio_prepare_for_idle and omap2_gpio_resume_after_idle. This could be avoided if PER stays ON in the latency-critical C-states,
* (78us) pwrdm_for_each_clkdm(mpu, core, deny_idle/allow_idle),
* (33us estimated) omap_set_pwrdm_state(mpu, core, neon),
* (11us) clkdm_allow_idle(mpu). Is this needed?

The HW idle time is 6.5us, which is negligible compared to the SW overhead required to reach the idle state.

===Use case idle stats===

Using only the cpuidle tracepoints, the average times in idle are (averaged over 60 samples):

{|border="1"
!Use case
!Measured BW (MB/s)
!Description
!Idle (us)
!Active (us)
|-
|[1]
|2.1
|Cpuidle, omap_sram_idle, only C1
|397
|394
|-
|[1]+[7]
|2.2
|Regs cache, only C1
|349
|397
|-
|[1]+[7]+[8]
|2.6
|khilman optims + Regs cache, only C1
|246
|364
|-
|[1]+[7]+[8]+[9]+[10]
|3.0
|allow/deny_idle on pwrdm->clkdm[0] for mpu, core + per=core in C1 + khilman's optims + Regs cache, only C1
|178
|403
|-
|[1]+[8]+[9]+[10]
|3.1
|allow/deny_idle on pwrdm->clkdm[0] for mpu, core + per>=core in C1 + khilman's optims, only C1
|152
|259
|-
|[1]+[5]+[6]+No IDLE
|3.1
|No cpuidle, no omap_sram_idle, all pwrdms to ON
|113
|477
|}

Notes:

* From the above stats, the average latencies in C1 (397us, 349us, 246us, 178us) exceed the idle duration without cpuidle (113us), hence the performance degradation.
* The registers cache optimizes the low power mode transitions, but is not sufficient to obtain a big gain: a few unused domains are still transitioning, which causes a big penalty in the idle path.
* khilman's optimizations are really helpful; furthermore they further reduce the number of register accesses going through the cache.
* The [1]+[8]+[9]+[10] combination brings the performance close to the non-CPUIDLE case (no IDLE, no omap_sram_idle, all pwrdms to ON).

===Registers cache accesses stats===

The number of register accesses is shown in PM debug using a registers cache statistics patch (included below). The debug log shows the large number of accesses in this optimized use case ([0]+[7]+[8]) and the cache efficiency:

/ # cat /debug/pm_debug/count 
usbhost_pwrdm (ON),OFF:719,OSWR:0,CSWR:578,INA:0,ON:1298,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 8279, hit 29, rate 0%
sgx_pwrdm (OFF),OFF:1,OSWR:0,CSWR:0,INA:0,ON:1,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 8275, hit 26, rate 0%
core_pwrdm (ON),OFF:19,OSWR:0,CSWR:20,INA:14,ON:54,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0,RET-MEMBANK2-OFF:0. Cache access 14960, hit 3966, rate 26%
per_pwrdm (ON),OFF:33,OSWR:0,CSWR:20,INA:0,ON:54,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 18811, hit 7817, rate 41%
dss_pwrdm (ON),OFF:719,OSWR:0,CSWR:578,INA:0,ON:1298,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 8279, hit 29, rate 0%
cam_pwrdm (OFF),OFF:1,OSWR:0,CSWR:1,INA:0,ON:1,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 10907, hit 2657, rate 24%
neon_pwrdm (ON),OFF:19,OSWR:0,CSWR:1271,INA:7,ON:1298,RET-LOGIC-OFF:0. Cache access 12611, hit 2885, rate 22%
mpu_pwrdm (ON),OFF:19,OSWR:0,CSWR:1271,INA:7,ON:1298,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0. Cache access 18390, hit 7396, rate 40%
iva2_pwrdm (OFF),OFF:1,OSWR:0,CSWR:1,INA:0,ON:1,RET-LOGIC-OFF:0,RET-MEMBANK1-OFF:0,RET-MEMBANK2-OFF:0,RET-MEMBANK3-OFF:0,RET-MEMBANK4-OFF:0. Cache access 8281, hit 31, rate 0%
usbhost_clkdm->usbhost_pwrdm (1)
sgx_clkdm->sgx_pwrdm (0)
per_clkdm->per_pwrdm (16)
cam_clkdm->cam_pwrdm (0)
dss_clkdm->dss_pwrdm (1)
core_l4_clkdm->core_pwrdm (21)
core_l3_clkdm->core_pwrdm (4)
d2d_clkdm->core_pwrdm (0)
iva2_clkdm->iva2_pwrdm (0)
neon_clkdm->neon_pwrdm (0)
mpu_clkdm->mpu_pwrdm (0)
prm_clkdm->wkup_pwrdm (0)
cm_clkdm->core_pwrdm (0)
/ # 

It can be noted that some power domains have a cache hit rate of 0% because they are unused (i.e. not controlled by any driver). Still, a lot of register accesses are performed in the idle path.

Here is the statistics patch:

diff --git a/arch/arm/mach-omap2/pm-debug.c b/arch/arm/mach-omap2/pm-debug.c
index ed9846e..632db47 100644
--- a/arch/arm/mach-omap2/pm-debug.c
+++ b/arch/arm/mach-omap2/pm-debug.c
@@ -119,6 +119,9 @@ static int pwrdm_dbg_show_counter(struct powerdomain *pwrdm, void *user)
		seq_printf(s, ",RET-MEMBANK%d-OFF:%d", i + 1,
				pwrdm->ret_mem_off_counter[i]);

+	seq_printf(s, ". Cache access %d, hit %d, rate %d%%",
+			pwrdm->pwrdm_cache_access, pwrdm->pwrdm_cache_hit,
+			(100 * pwrdm->pwrdm_cache_hit)/pwrdm->pwrdm_cache_access);
	seq_printf(s, "\n");

	return 0;
diff --git a/arch/arm/mach-omap2/powerdomain.c b/arch/arm/mach-omap2/powerdomain.c
index 537595c..1b80ee9 100644
--- a/arch/arm/mach-omap2/powerdomain.c
+++ b/arch/arm/mach-omap2/powerdomain.c
@@ -711,10 +711,12 @@ static int pwrdm_cache_read(struct powerdomain *pwrdm, int index, int *value)
	if (index >= PWRDM_CACHE_SIZE)
		return -EINVAL;

+	pwrdm->pwrdm_cache_access++;
	if (!(pwrdm->cache_state & (1 << index)))
		return -ENODATA;

	*value = pwrdm->cache[index];
+	pwrdm->pwrdm_cache_hit++;
	return 0;
}

diff --git a/arch/arm/mach-omap2/powerdomain.h b/arch/arm/mach-omap2/powerdomain.h
index 92386bd..a9eae1c 100644
--- a/arch/arm/mach-omap2/powerdomain.h
+++ b/arch/arm/mach-omap2/powerdomain.h
@@ -172,6 +172,10 @@ struct powerdomain {
	struct mutex lock;
	int state;
	int cache[PWRDM_CACHE_SIZE];
+
+	int pwrdm_cache_access;
+	int pwrdm_cache_hit;
+
	long cache_state;
	unsigned state_counter[PWRDM_MAX_FUNC_PWRSTS];
	unsigned ret_logic_off_counter;

==Links==

===Device latency patches===

[http://marc.info/?l=linux-omap&m=133475685213067&w=2 PM QoS device constraint code patches]

[http://omappedia.org/wiki/TWL4030_power_scripts T2 scripts information page]

==Attachments==

===Kernel patches and config===

[[File:OMAP_latency_measurements_patches_and_config.tar.gz|center|thumb|320px]]

==Presentation slides: Fosdem/ELC 2012==

[[File:ELC-2012-jpihet-DeviceLatencyModel.pdf]]

--[[User:Jpihet|Jpihet]] 24 Apr 2012
