GStreamer + TensorRT


I’m trying to modify it to run my network and perform inference on a video stream, from either an .mp4 file or an RTSP source.

I have successfully implemented my network and it works when used with images, but I’m having trouble capturing the video stream and feeding it to my engine.

It’s not clear to me:

  • which plugins I should use in the GStreamer pipeline
  • how to capture RGB frames instead of RGBA.

I’m running the following pipelines:

  • rtspsrc location=rtsp://user:pass@IP/media1.sdp protocols=udp latency=0 ! decodebin ! videoconvert ! appsink name=mysink
  • filesrc location=test.mp4 ! decodebin ! nvvidconv ! video/x-raw, width=480, height=640 ! appsink name=mysink
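
One option may be to let videoconvert negotiate RGB explicitly with a caps filter right before the appsink, so the sink never sees RGBA in the first place. A hedged sketch based on the file pipeline above (the format=RGB caps are an assumption, not tested on your setup; width/height kept as in your pipeline):

```
filesrc location=test.mp4 ! decodebin ! videoconvert ! video/x-raw,format=RGB,width=480,height=640 ! appsink name=mysink
```

As I understand it, nvvidconv is mainly for converting to/from NVMM memory; for plain system-memory RGB handed to an appsink, videoconvert is usually the safer choice, at the cost of some CPU.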

I’m able to render the content of the stream with glDisplay, but when I try to run inference I never receive any output from my engine.
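If the sink still delivers RGBA, dropping the alpha channel on the CPU before handing the frame to the engine is cheap. A minimal NumPy sketch, assuming the frame arrives as a height × width × 4 uint8 array (the dimensions here are placeholders, not your actual caps):

```python
import numpy as np

# Hypothetical RGBA frame as it might come out of the appsink:
# height x width x 4 channels, uint8.
rgba = np.zeros((480, 640, 4), dtype=np.uint8)
rgba[..., 3] = 255  # opaque alpha

# Drop the alpha channel; np.ascontiguousarray makes the result a
# dense buffer, suitable for an engine that expects packed RGB.
rgb = np.ascontiguousarray(rgba[..., :3])

print(rgb.shape)  # (480, 640, 3)
```

The slice alone returns a non-contiguous view, which some inference APIs reject, hence the copy via `np.ascontiguousarray`.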

This is the output of an execution:

What am I doing wrong?



Could you test it with a live camera first?
Please remember to update the parameter in jetson_inference: