• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.1
• TensorRT Version: 8.6.2
I am trying to run a DeepStream application inside the nvcr.io/nvidia/deepstream:7.0-triton-multiarch container, but I am getting these errors.
The test pipeline runs just fine outside the container on another Jetson with the same DeepStream version.
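To narrow this down, a quick sanity check like the following (just a sketch, not part of my actual application) would show whether every GStreamer element factory the pipeline needs can be found inside the container:
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# Print which of the required element factories are registered in this environment
for name in ("filesrc", "qtdemux", "h264parse", "nvv4l2decoder", "nvstreammux",
             "nvinfer", "nvvideoconvert", "nvdsosd", "x264enc", "qtmux", "filesink"):
    factory = Gst.ElementFactory.find(name)
    print(f"{name}: {'OK' if factory is not None else 'MISSING'}")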
The pipeline reads an MP4 file, runs YOLOv8 inference, and writes a new MP4 file with the inference results drawn on it.
These are the element creation calls, for reference:
source = Gst.ElementFactory.make("filesrc", "file-source")                    # reads the input .mp4 from disk
demuxer = Gst.ElementFactory.make("qtdemux", "qtmux-0")                       # demuxes the MP4 container
h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")              # parses the H.264 elementary stream
decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")          # hardware-accelerated decode
streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")            # batches frames for nvinfer
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")                # YOLOv8 primary inference
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "nvvideo-converter")    # converts to a format nvdsosd accepts
nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")                 # draws bounding boxes and labels
nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "nvvideo-converter2")  # converts back for the encoder
encoder = Gst.ElementFactory.make("x264enc", "encoder")                       # software H.264 encode
codeparser = Gst.ElementFactory.make("h264parse", "h264-parser2")             # parses the encoded stream for muxing
container = Gst.ElementFactory.make("qtmux", "qtmux-1")                       # muxes back into an MP4 container
sink = Gst.ElementFactory.make("filesink", "file-sink")                       # writes the output file
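And this is a minimal sketch of how these elements are added and linked for this kind of file-in/file-out pipeline (the property values, file paths, and config file name below are placeholders, not my real settings, and the actual linking code may differ slightly):
pipeline = Gst.Pipeline.new("file-pipeline")
for elem in (source, demuxer, h264parser, decoder, streammux, pgie,
             nvvidconv, nvosd, nvvidconv2, encoder, codeparser, container, sink):
    pipeline.add(elem)

# Placeholder properties; the real paths and values differ
source.set_property("location", "input.mp4")
streammux.set_property("batch-size", 1)
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
pgie.set_property("config-file-path", "config_infer_yolov8.txt")
sink.set_property("location", "output.mp4")

# qtdemux creates its source pads dynamically, so the parser is linked in a callback
def on_demux_pad_added(demux, pad):
    if pad.get_name().startswith("video"):
        pad.link(h264parser.get_static_pad("sink"))

demuxer.connect("pad-added", on_demux_pad_added)

source.link(demuxer)
h264parser.link(decoder)

# nvstreammux sink pads are request pads
decoder.get_static_pad("src").link(streammux.get_request_pad("sink_0"))

streammux.link(pgie)
pgie.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(nvvidconv2)
nvvidconv2.link(encoder)
encoder.link(codeparser)
codeparser.link(container)
container.link(sink)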
I am accessing both Jetsons over SSH. The first one was only for testing, which is why I ran the test pipeline outside the container. Now I need to replicate the setup on other machines, and we will use a container to make that replication easier.