Hi guys!
I have already read some posts about this topic, but they didn't help me.
I am using the Capture and Render functions from jetson-utils, and in between I apply some OpenCV CUDA functions.
I use two 60 fps, Full-HD MP4 input videos, so the time budget per frame is about 16.6 ms to keep up with 60 fps.
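For context, this is roughly what my loop looks like (a simplified sketch; the file path, the output URI and the processing step are placeholders):

```cpp
#include <jetson-utils/videoSource.h>
#include <jetson-utils/videoOutput.h>

int main()
{
	// one of the two 60 fps Full-HD MP4 inputs (path is a placeholder)
	videoSource* input  = videoSource::Create("file:///home/user/video1.mp4");
	videoOutput* output = videoOutput::Create("display://0");

	if( !input || !output )
		return 1;

	uchar3* frame = NULL;

	// capture, process and render one frame per iteration
	while( input->Capture(&frame, 1000) )
	{
		// ... OpenCV CUDA processing on 'frame' goes here ...

		output->Render(frame, input->GetWidth(), input->GetHeight());
	}

	return 0;
}
```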
Here is my problem:
For some frames only, I get these two warnings/messages:
1.
nvbuf_utils: dmabuf_fd 1106 mapped entry NOT found
nvbuf_utils: NvReleaseFd Failed… Exiting…
and
2.
[gstreamer] gstreamer mysink taglist, video-codec=(string)"H.264\ (High\ Profile)", language-code=(string)en, bitrate=(uint)5992786, minimum-bitrate=(uint)865440, maximum-bitrate=(uint)23660640;
Regarding the first message: as a consequence, the processing time for the affected frame doubles from about 16 ms to 33 ms (see picture).
I would like to know how to avoid this behavior, where a single frame in the streaming pipeline is not handled correctly.
My assumption is that the affected frame is dropped, and the timer for the next frame then also includes the time of the failed frame, which is why the measured computation time doubles.
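To check this, here is a rough sketch of how I would time the capture and the OpenCV processing separately inside the loop (processFrame is just a placeholder for my OpenCV CUDA code):

```cpp
#include <chrono>
#include <cstdio>
#include <jetson-utils/videoSource.h>

// placeholder for the actual OpenCV CUDA processing
void processFrame(uchar3* frame, uint32_t width, uint32_t height) { /* ... */ }

// runs one loop iteration and prints how long capture and processing took
bool timedIteration(videoSource* input)
{
	auto t0 = std::chrono::steady_clock::now();

	uchar3* frame = NULL;
	if( !input->Capture(&frame, 1000) )
		return false;

	auto t1 = std::chrono::steady_clock::now();

	processFrame(frame, input->GetWidth(), input->GetHeight());

	auto t2 = std::chrono::steady_clock::now();

	printf("capture %.2f ms, processing %.2f ms\n",
	       std::chrono::duration<double, std::milli>(t1 - t0).count(),
	       std::chrono::duration<double, std::milli>(t2 - t1).count());

	return true;
}
```

This should show whether the extra ~16 ms end up in Capture() (a dropped/failed frame) or in my own processing.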
The second message doesn't seem to have a negative impact, but I would like to know its meaning and how to avoid it as well.
Another question: is there another way to capture video frames directly into GPU memory on the Jetson boards, so that I can use OpenCV CUDA functions on them? My solution so far is using jetson-inference, but maybe there is another one.
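To illustrate what I mean by processing directly on the GPU: my assumption is that the uchar3 frame returned by Capture() can be wrapped as a CV_8UC3 GpuMat without an extra host copy, roughly like this (cvtColor is only an example):

```cpp
#include <jetson-utils/videoSource.h>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

// capture a frame and run an example OpenCV CUDA function on it
void processOnGpu(videoSource* input)
{
	uchar3* frame = NULL;

	if( !input->Capture(&frame, 1000) )
		return;

	// assumption: the frame pointer is CUDA-accessible, so the GpuMat can
	// reference it directly instead of uploading a copy from the host
	cv::cuda::GpuMat gpuFrame(input->GetHeight(), input->GetWidth(), CV_8UC3, frame);

	cv::cuda::GpuMat gray;
	cv::cuda::cvtColor(gpuFrame, gray, cv::COLOR_RGB2GRAY);   // example processing
}
```

Is this the intended way, or is there a cleaner/faster path to get decoded frames into OpenCV CUDA?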
Thank you for your advice! I will be glad to read it!