GStreamer Pipeline for UDP in imagenet-camera TensorRT - Jetson TX2

Hi everyone,

I have been trying all day to create a pipeline that receives a UDP stream on the Jetson and performs inference on the video.

I have tested the following pipeline and it works well and shows the video stream on the screen:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! omxh264dec ! nvoverlaysink async=false -e

However, when I swap out the sink and plug this pipeline into TensorRT's gstCamera.cpp, it does not show any video.

I also tried what was discussed in the following thread, but without any success.

https://devtalk.nvidia.com/default/topic/1026076/jetson-tx2/can-imagenet-camera-link-with-ip-camera-/1

Can anyone help me solve this issue?

Hi maycondouglasd, have you seen this fork, which was modified to work with an IP camera?

https://github.com/Abaco-Systems/jetson-inference-gv

What changes did you try making to the pipeline in gstCamera? Basically you’ll want to substitute your pipeline (minus the nvoverlaysink) for the nvcamerasrc parts here:

https://github.com/dusty-nv/jetson-inference/blob/8ed492bfdc9e1b98f7711cd5ae224ce588635656/util/camera/gstCamera.cpp#L338
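For reference, here is a rough, untested sketch of what that substitution could look like inside gstCamera::buildLaunchStr(). The port and RTP caps come from your gst-launch test above, the appsink name and the mWidth/mHeight/mLaunchStr members follow the linked file, and the nvvidconv caps are my own guess, so treat it as a starting point rather than a verified patch:

// untested sketch: replace the nvcamerasrc branch of buildLaunchStr()
// with a UDP/RTP H.264 source feeding the existing appsink
std::ostringstream ss;

ss << "udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! ";
ss << "rtph264depay ! h264parse ! omxh264dec ! ";

// omxh264dec outputs NV12 in NVMM memory; nvvidconv copies/scales it into
// system memory so the appsink that gstCamera reads from can consume it
ss << "nvvidconv ! video/x-raw, format=(string)NV12, ";
ss << "width=(int)" << mWidth << ", height=(int)" << mHeight << " ! ";
ss << "appsink name=mysink";

mLaunchStr = ss.str();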

Both the original way and the way you are proposing with omxh264dec will output NV12 format, so if you get the pipeline working, the formats should line up. That will enable you to continue using the gstCamera::ConvertRGBA() function as it’s used today in the imagenet-camera sample.
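In other words, once the appsink is handing out NV12 frames, the existing capture/convert loop in imagenet-camera should keep working unchanged, roughly like this (paraphrased from the sample; double-check the exact Capture()/ConvertRGBA() signatures in your checkout):

void* imgCPU  = NULL;
void* imgCUDA = NULL;

// grab the next NV12 frame delivered by the gstCamera appsink
if( !camera->Capture(&imgCPU, &imgCUDA, 1000) )
	printf("imagenet-camera: failed to capture frame\n");

// convert the NV12 frame to float4 RGBA on the GPU for the network
void* imgRGBA = NULL;
if( !camera->ConvertRGBA(imgCUDA, &imgRGBA) )
	printf("imagenet-camera: failed to convert from NV12 to RGBA\n");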