Display driver enhancement topics

From OMAPpedia

'''[Following are notes from March 2011]'''

*Configurability: Add the ability to handle multiple paths between overlays/managers/panels.
**A solution might be to use MCF or DRM; the two need to be compared.
**Can they coexist?
*Scalability: Make it easier to add new overlays/managers in DSS.
**Can locking in the DSS manager be optimized so that there is less chance of blocking?
**There may be a use case (e.g. split-display or extended desktop) where the size of overlays has to be changed dynamically.
***But isn't that handled by the driver when the size of the buffer/frame passed by the user changes?
***How is multi-monitor (extended-desktop) support handled in the PC world?
***Some synchronization mechanism between the rendering of overlays may also be required in hardware.
*Add writeback support.
*FB - V4L2 inter-operability.
**On DSI panels in command mode, when updates come in through both FB and V4L2, how do we synchronize updates to the panel?
**A central piece of code that manages the updates is needed.
***The PM inactivity timer should also be petted on every update from either FB or V4L2.
*Support access to HDMI via both audio and video devices.
**In DRM there is an HDMI driver from others in the community; we need to see how they handle audio.
*HDMI: Add a standard HPD / user-space notification mechanism. Other standard/common code (e.g. EDID or AVI infoframe handling) can be designed so that it is reusable across SoCs.
*Add the ability to switch the channel-out of overlays per frame, so that e.g. one iteration goes to the writeback pipeline, the next to a manager, and so on.
**Suppose you want 1/8 downscaling. You can do that by using one pipeline for 1/4 and then writeback for 1/2, which uses 2 pipelines. If writeback can pipe data back into the same pipeline, a single pipeline can do both passes and send the result out in alternate cycles.
**Can MCF/DRM handle this?
*Power management: refer to http://omappedia.org/wiki/Notes_on_Display_power_management_and_scope_for_enhancement
*Support DRM
**Support the security model for direct rendering: even though rendering may be direct, i.e. not indirect through the display server, permission to put pixels on the screen must still come through the display server. (This comes as part of DRM, but any non-DRM approach would have to re-invent this mechanism.)
***See slide 7: http://www.slideshare.net/moriyoshi/x-architectural-overview
**Separate buffers (i.e. drm_framebuffer objects) from the display path, so that at runtime we can dynamically switch between rendering via overlay/pipe(s) and/or the GPU without reallocating buffers.
***Building on top of this, we can implement virtual overlays to handle use cases where more videos are being rendered than there are pipes available in DSS. A virtual overlay means using the GPU (or the writeback pipe, or the 2D hardware in the 4460/OMAP5 and later) to do a YUV->RGB scale/blit of multiple video streams into a shared overlay layer the same size as the framebuffer. This preserves the semantics of a non-destructive overlay and per-pixel ARGB blending, and bypasses the window-manager/compositing step for video, so it is still more efficient than YUV->RGB blitting into the framebuffer layer.
***I'm a bit undecided at this point whether virtual overlays should/could all be hidden in the driver, or whether there should be a userspace component in the display server. Typically, if you look at VA-API (libva), it uses a sort of extension to DRI to render frames of video, similar to how flipping is handled for GL apps. If we took this approach, it could be the display server that decides to fall back to virtual overlays. There must be some analogy to DRI for Android.
***Separation of buffer and pipe is also needed for a virtual display spanning multiple monitors, so that a video window can span multiple monitors too.
**Hotplug notification to userspace, and detection of monitor state (connected/disconnected), is also part of DRM.

*Support MCF

*Make omap_dss_probe() a little more tolerant of failures.
**Multiple panels/components are initialized in omap_dss_probe() - venc_init, dsi_init, hdmi_init, etc. If any of these fails, we abort the whole probe. It should be possible to try the remaining component inits when one fails; e.g. if venc_init fails, it would be nice to still try dsi_init.
**omapfb is also quite picky: when it starts, it tries to get all the DSS displays, and if it fails to get one of them (panel probe failed, or similar), it fails and exits.

*Resources for FB vs. V4L2 are hardcoded via bootargs (num_buffers, etc.) - dynamic configuration is not possible.
**Make pipelines resources that are claimed on the fly: we'd have /dev/fb0-4 and /dev/v4l2/video0-4, and when the user actually opens one of the nodes, one pipeline is requested (a request/release mechanism).
**Can MCF/DRM handle this issue too?

*Support DSI video mode.

*Separate the various components into modules - e.g. dsi, venc, hdmi, etc. Today they all build into one dss module.

*DSI1 and DSI2 should have no driver replication; the same driver should be used with two instances of the device.

*We should create standard DSI/DPI interfaces in DSS2. The idea is to 'hide' from the panel the fact that it is talking to 'OMAP DSS2', so that it can use a standard DSI/DPI interface. That way the same DSI panel driver can be used on both OMAP and Netra.
<hr>
* [http://omappedia.org/wiki/Display_Drivers_Domain_Wiki BACK to Display Domain Wiki]

Latest revision as of 05:23, 16 April 2011
