I create two decoders: dec0 = NvVideoDecoder::createVideoDecoder("dec0"); dec1 = NvVideoDecoder::createVideoDecoder("dec1");
two threads: pthread_create(&dec0_tid, NULL, dec_capture_loop_fcn, nvidia_data); pthread_create(&dec1_tid, NULL, dec_capture_loop_fcn, nvidia_data);
two renderers: auto renderer0 = NvEglRenderer::createEglRenderer(render_name.c_str(), w, h, x, y); auto renderer1 = NvEglRenderer::createEglRenderer(render_name.c_str(), w, h, x, y);
in dec_capture_loop_fcn: renderer0->render(fd); renderer1->render(fd);
It's the latest JetPack 4.6.
There are two threads, each doing decode -> render.
But the two displayed videos are abnormal.
I find that render() sometimes takes a long time (maybe 30-60 ms).
If I remove render(fd), the loop is fast (about 10 ms per frame).
Hi,
Creating multiple renderers in a single process may not achieve the target performance. Please composite the sources into a single video plane through NvBufferComposite(), so that you can create a single renderer to render the composited plane.
I see that 14_multivideo_decode creates several renderers and several threads (one thread per renderer). Does that give the same performance as NvBufferComposite()?
Hi,
By default, rendering is not supported in 14_multivideo_decode. If you run with --help, you will see this NOTE:
NOTE: Currently multivideo_decode to be only run with --disable-rendering Mandatory
For rendering frames from multiple sources, we would suggest using NvBufferComposite() to composite the sources into a single video plane. We have a similar implementation called nvmultistreamtiler in the DeepStream SDK. We would suggest implementing the same approach with jetson_multimedia_api.
I see that the flow is: NvBufferComposite(m_dmabufs, m_compositedFrame, &m_compositeParam); g_renderer->render(m_compositedFrame);
If I receive several RTSP streams and decode them in different threads, then m_dmabufs is written from different threads, which means I should have a dedicated render thread. Do I need a mutex to lock m_dmabufs, or can I simply do: while (1) { NvBufferComposite(m_dmabufs, m_compositedFrame, &m_compositeParam); g_renderer->render(m_compositedFrame); }
Hi,
For this use case, you need multiple decoding threads and one rendering thread. You need a queue to hold the NvBuffers, and a mutex to protect NvBuffer reads/writes.