GStreamer + TensorRT

Hello,

I’m trying to modify https://github.com/dusty-nv/jetson-inference/ to run my own network and perform inference on a video stream from either an .mp4 file or an RTSP source.

I have successfully implemented my network and it works with images, but I’m having trouble capturing the video stream and feeding it to my engine.

It’s not clear to me:

  • which plugins I should use in the GStreamer pipeline;
  • how to capture RGB frames instead of RGBA (see the sketch after the pipelines below).

I’m using the changes from Pull Request #93 (“RTSP stream GStreamer pipeline fixes” by omaralvarez, dusty-nv/jetson-inference on GitHub) to run the following pipelines:

  • rtspsrc location=rtsp://user:pass@IP/media1.sdp protocols=udp latency=0 ! decodebin ! videoconvert ! appsink name=mysink
  • filesrc location=test.mp4 ! decodebin ! nvvidconv ! video/x-raw, width=480, height=640 ! appsink name=mysink
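
For the RGB question specifically, my understanding is that a caps filter in front of the appsink should let me request packed RGB directly, with videoconvert/videoscale doing the conversion on the CPU. Below is only a minimal sketch of what I mean, not my actual code; the file name, sizes, and sink name are placeholders:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <cstdio>

int main(int argc, char** argv)
{
    gst_init(&argc, &argv);

    // Force RGB (and a fixed size) via a caps filter before the appsink,
    // so pulled buffers are tightly packed 24-bit RGB.
    GError* err = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "filesrc location=test.mp4 ! decodebin ! videoconvert ! videoscale ! "
        "video/x-raw,format=RGB,width=640,height=480 ! "
        "appsink name=mysink max-buffers=1 drop=true",
        &err);
    if (!pipeline) { fprintf(stderr, "parse error: %s\n", err->message); return 1; }

    GstElement* sink = gst_bin_get_by_name(GST_BIN(pipeline), "mysink");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Pull one sample to verify the negotiated format really is RGB.
    GstSample* sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
    if (sample) {
        gchar* caps = gst_caps_to_string(gst_sample_get_caps(sample));
        printf("negotiated caps: %s\n", caps);
        g_free(caps);
        gst_sample_unref(sample);
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}

Is this the right direction, or should I be using nvvidconv here instead?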

I’m able to render the content of the stream using glDisplay, but when I try to run inference I never receive any output from my engine.
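
Concretely, the way I’m trying to hand frames to TensorRT looks roughly like this (a sketch only; myEngine->Infer(), width and height are placeholders for my own wrapper, not jetson-inference API):

// Inside the capture loop, once the pipeline above is PLAYING:
GstSample* sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
if (sample) {
    GstBuffer* buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        // With RGB caps negotiated, map.data holds width*height*3 packed bytes.
        myEngine->Infer(map.data, width, height);   // placeholder call
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
}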

This is the output of an execution (full log posted on Pastebin): [TRT] Loading network profile from engine... [TRT] MY_ENGINE loaded [TRT] C…

What am I doing wrong?

Thanks

Hi,

Could you test it with a live camera first?
Please remember to update the parameter in jetson-inference:
https://github.com/dusty-nv/jetson-inference/blob/master/imagenet-camera/imagenet-camera.cpp#L37
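
For reference, the parameter there is the DEFAULT_CAMERA define near the top of imagenet-camera.cpp (paraphrased; please check the exact line in your checkout):

#define DEFAULT_CAMERA -1   // -1 = onboard MIPI CSI camera,
                            // >=0 = index of a /dev/video* V4L2 device (e.g. 0 for /dev/video0)

Setting it to match your camera lets you confirm the network and display path work before debugging the RTSP/file pipelines.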

Thanks.