We are investigating it internally and will update when there are new findings.
When a decoded frame is pushed into the NVMM buffer and then copied into a CPU buffer, is the frame then removed from the NVMM buffer?
1- In your opinion, does this work correctly on Jetson Xavier NX? It works on Jetson Nano.
2- Is it possible to use the decoded frames from the NVMM buffer for processing without copying them to a CPU buffer? If so, how?
3- If I want to feed the decoded frames to a TPU dongle connected to the Jetson Nano via USB, do I need to copy them from the NVMM buffer to a CPU buffer? Or is it possible to pass the decoded frames from the NVMM buffer directly to the USB TPU?
4- If I want the most efficient mode for decoding the streams, do I have to use pure GStreamer? How can I use GStreamer buffers as numpy arrays for processing?
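For reference on question 4, the usual pattern is to pull frames from an `appsink`, map the `GstBuffer`, and wrap the mapped bytes with numpy without an extra copy. A minimal sketch of the numpy side, with raw bytes standing in for the data you would get from `buffer.map()` (the frame dimensions here are made up for illustration):

```python
import numpy as np

# Stand-in for the mapped bytes of a GstBuffer (e.g. from appsink's
# emit("pull-sample") followed by buffer.map()). A real frame's width,
# height, and format come from the sample's caps.
width, height, channels = 4, 2, 4  # tiny BGRx frame, for illustration only
raw = bytes(range(width * height * channels))

# np.frombuffer wraps the bytes without copying; reshape gives the
# usual (rows, cols, channels) image layout for processing.
frame = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, channels)
```

On a real pipeline, note that buffers in NVMM memory are not directly CPU-mappable; an `nvvidconv` element (or similar) is typically needed first to move the frame into CPU-accessible `video/x-raw` memory.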
I don’t know if it helps, but I have the same issue. During testing I found that if I open a second connection to the problematic camera, for instance with VLC on a laptop, then decoding with nvv4l2decoder works.
I have attached logs of a working (ok) and a non-working (wrong) situation:
output_v_ok.txt (20.9 KB)
output_v_wrong.txt (17.3 KB)
It seems that the udpsrc caps are detected differently. Might that be the issue?
If there is anything else I can test, please let me know.
If you upgrade to JP4.4.1 (r32.4.4), you should be able to run with h264parse:
rtspsrc(or udpsrc) ! rtph264depay ! h264parse ! nvv4l2decoder ! ...
Please give it a try.
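To make the suggested pipeline concrete, here is a sketch of a full pipeline string ending in an `appsink` for application-side processing. The RTSP URL is hypothetical, and the `nvvidconv ! video/x-raw` step (to copy frames out of NVMM into CPU-accessible memory) is an assumption about the intended downstream use, not part of the suggestion above:

```python
# Hypothetical camera URL; substitute your own stream.
src = "rtspsrc location=rtsp://192.168.1.10/stream latency=200"

# rtph264depay ! h264parse ! nvv4l2decoder as suggested above, then
# nvvidconv to bring the decoded frame from NVMM into CPU memory.
pipeline = (
    f"{src} ! rtph264depay ! h264parse ! nvv4l2decoder "
    "! nvvidconv ! video/x-raw,format=BGRx ! appsink"
)
```

A string like this can be passed to `gst_parse_launch()` (or tried on the command line with `gst-launch-1.0`, replacing `appsink` with a sink such as `fakesink` for a quick test).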
Ok, sorry, I thought the issue was not fixed yet.
I am already on r32.4.4:
R32 (release), REVISION: 4.4, GCID: 23942405, BOARD: t186ref, EABI: aarch64, DATE: Fri Oct 16 19:37:08 UTC 2020
So I probably have a different issue then.
Any other ideas how this can happen? I can reproduce it perfectly. It seems to be a race condition of some sort.
For UDP streaming, you can refer to
If the issue is still present, please make a new post with steps so that we can replicate it and check.