I have spent the whole day trying to create a pipeline that receives a UDP stream on the Jetson and performs inference on the video.
I have tested the following pipeline and it works well and shows the video stream on the screen:
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink async=false -e
When I replace the display sink with the appsink-based pipeline from jetson-inference's gstCamera.cpp (for TensorRT inference), no video is shown.
I also tried what has been discussed in the following thread, but without any success.
Can anyone help me solve this issue?