I need to use two monitors on the TX1 over DP + DSI: DP as the main monitor showing the UI and decoded video, and DSI as a secondary monitor showing the camera video.
So far I have looked into a few possible solutions:
1. gstreamer: I can use the "display-id" property of nvoverlaysink to select the display.
2. mmapi renderer: I did not find a corresponding parameter. The Xlib call gets its display from DefaultScreen, so I can set the environment variable (export DISPLAY=:x.0), but the system can only use one value of that variable at a time.
3. mmapi DRM: through the "connector" parameter I can select a physical interface on the card, but DRM has no UI support.
My questions are:
I would like to use the mmapi architecture for multimedia development, with Qt as the UI. Is there a good way to make the two displays show different video content?
Does solution 2 work if I export different DISPLAY values in two terminals?
Thank you for your reply; my hardware is not ready yet.
1. My test:
I ran a test setting different environment variables in two terminals; no matter how terminal 1 is set, it does not affect the process in terminal 2.
Checking the process's environment variables shows DISPLAY=:1.0.
2. One of my ideas:
Suppose I have two processes, A and B.
I envision setting the DISPLAY variable for each of them in its startup script.
Then it should be possible for the two processes to use different display interfaces.
There is an important question: can the two processes share memory through dma_buf?
For example, in process A the pipeline is dec->vic and the VIC's capture_plane uses dma_buf; can that fd be handed to process B's renderer for display?
May I ask how you configured the X server so that both DISPLAY=:0.0 and DISPLAY=:1.0 are available?
I did not configure X server.
I did not configure anything special. I just open a terminal, first run export DISPLAY=:1.0 (or export DISPLAY=:0.0), then run the mmapi sample.
What is the result of xrandr when you export these variables?
xrandr behaves the same as the other programs: if the environment variable is not set correctly, it cannot run.
ubuntu@tegra-ubuntu:~/tegra_multimedia_api/samples/00_video_decode$ export DISPLAY=:0.0
ubuntu@tegra-ubuntu:~/tegra_multimedia_api/samples/00_video_decode$ xrandr
Invalid MIT-MAGIC-COOKIE-1 keyCan't open display :0.0
ubuntu@tegra-ubuntu:~/tegra_multimedia_api/samples/00_video_decode$ export DISPLAY=:1.0
ubuntu@tegra-ubuntu:~/tegra_multimedia_api/samples/00_video_decode$ xrandr
Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 16384 x 16384
HDMI-0 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 1150mm x 650mm
   1280x720   60.00 +  59.94   50.00
   1920x1080  60.00*  59.95   50.00
   640x480    59.94   59.94
What I actually wanted to ask was how to set two different DISPLAY values for two separate monitors. It looks like that cannot work, right?
Because my hardware is not ready yet, the HDMI output is on the default DISPLAY=:1.0.
The following is only an assumption of mine; I want to verify whether it is feasible.
May I ask what you mean by "DRM has no UI support"? Could you elaborate on "UI support"? Does it imply Qt is supported by EGL but not DRM?
Do you mean that Qt can support DRM? I am very interested in this.
I have a few questions:
1. Which version of Qt supports DRM? Must it be the embedded version?
2. With DRM, can Qt and video be superimposed on screen? For example, video output in the main window, with Qt content superimposed on window B (or C) with a translucent effect.
Looking forward to your reply.
Sorry li_lin, that is not what I meant…
I just wanted to make sure what your request is. It looks like you need Qt, and Qt here is based on EGL.
Don’t know if this is particularly relevant, but you can’t use DISPLAY=:1.0 unless you have already started a second X server. “:0.0” and “:0.1” would be relevant for two monitors of one X instance, “:0.0” and “:1.0” would be relevant for two X servers with one display on each server.
Sorry for bringing this issue up again. Our team would like to know whether you or anyone else still needs this use case.
We solved the problem with DRM on R28.1:
HDMI uses CRTC0 with WIN_A; buffers are submitted with drmModePageFlip().
DSI uses CRTC1 with WIN_B; buffers are submitted with drmModeSetPlane().
drmModeSetPlane() does not need to poll for the DRM driver to return the buffer, so the two displays do not conflict over the shared drm_fd.
You can also have HDMI and DSI each use their own WIN_B and submit buffers with drmModeSetPlane().
Thanks for the feedback. I am glad to hear you finished your project.
I would also like to ask about your development with Qt: did you use Qt + DRM in the end?
Our team would like to know if there is anything we can help with on Qt (e.g. integrating it with our native graphics system).
Thank you ~
We use a Qt + DRM architecture. Qt 5.9 supports DRM: we configure Qt's QPA backend to eglfs_kms_egldevice and modify the Qt plugin source code to make it compatible with the DRM library on L4T.
The main modifications to the Qt plugin source are:
1. drmOpen() uses the device name "nv-drm";
2. Qt is not allowed to modify the CRTC configuration; the existing configuration is used directly for display.
The way we use Qt may not be very common; most people use X11 as the display backend.
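For anyone trying a similar setup, selecting the eglfs KMS/EGLDevice backend at launch is typically done through environment variables. A sketch based on Qt's embedded-Linux documentation; the application name is a placeholder, and the source-level plugin changes described above are still required:

```shell
# Hypothetical launch setup: pick the eglfs platform plugin and its
# KMS/EGLDevice integration (variable names per Qt's embedded Linux docs).
export QT_QPA_PLATFORM=eglfs
export QT_QPA_EGLFS_INTEGRATION=eglfs_kms_egldevice
# ./my_qt_app   <- then launch the application (placeholder binary name)
```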
Is it possible to share, or give any reference for, how exactly the change below was done?
"The main modifications to the Qt plugin source are:
1. drmOpen() uses the device name "nv-drm";
2. Qt is not allowed to modify the CRTC configuration; the existing configuration is used directly for display."
Also, do you use our default rootfs to enable eglfs in QPA?
1. Yes, the Qt target file system we use is the L4T 28.1 rootfs.
2. The Qt source modifications are in the attachment.
qt-drm(qt-everywhere-src).rar (194 KB)