Jetson Nano - Using Triton Inference Server via the DeepStream GStreamer plugin

Hi!

I’m trying to get the nvinferserverbin GStreamer plugin working on a Jetson Nano. I have the following setup on a Jetson Nano developer kit:

Jetpack - 4.6.2 (installed via SDKmanager)
DeepStream - 6.0.1 (installed via SDKmanager)
tritonserver - 2.19.0 (installed in /opt/tritonserver via github download & unpacking of tritonserver2.19.0-jetpack4.6.1.tgz)

I would like to get the plugin working either using the embedded triton server shared library, or speaking to a remote triton server over the network via a gstreamer plugin. I’ve run into the following issues:

First issue: I can’t see nvinferserver in the list of GStreamer plugins viewable with gst-inspect-1.0; I can only see nvinferserverbin, which I’ll use in the following examples.

Second issue: when I run gst-inspect-1.0 nvinferserverbin I get a number of errors like the following in the output:

(gst-inspect-1.0:9924): GLib-GObject-CRITICAL **: 16:36:49.568: g_object_get_property: assertion 'G_IS_OBJECT (object)' failed
flags: readable, writable, changeable only in NULL or READY state

Third issue: before configuring the plugin to use a specific model, I tried to run a basic pipeline that includes the plugin, but I get the following error even though the capsfilter matches what the plugin claims to support via gst-inspect-1.0:

gst-launch-1.0 nvarguscamerasrc num-buffers=10 ! 'video/x-raw(memory:NVMM), width=224, height=224, format=NV12, framerate=10/1' ! nvinferserverbin model_name=$MODEL model_repo=$MODEL_REPO ! fakesink

Which results in the following output:

(gst-launch-1.0:11081): GStreamer-WARNING **: 17:02:35.670: Element dsnvinferserverbin0 has an ALWAYS template sink, but no pad of the same name

(gst-launch-1.0:11081): GStreamer-WARNING **: 17:02:35.670: Element dsnvinferserverbin0 has an ALWAYS template src, but no pad of the same name
WARNING: erroneous pipeline: could not link nvarguscamerasrc0 to dsnvinferserverbin0, dsnvinferserverbin0 can't handle caps video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)10/1

I then tried running the plugin with the model_repo flag pointing at a valid model repository folder (one that had worked fine with a standalone Triton Inference Server), passing a valid model name via the model_name flag, and setting the capsfilter to match the model’s expected resolution. I still got the above error.

  • Is there anything I’m missing or doing wrong?
  • Should the above software configuration work for running nvinferserver?
  • Is there a working example of a standalone gstreamer pipeline command using nvinferserver I could try?

Ok, I think I have what I need working now. I’m not sure what the nvinferserverbin plugin is, but it seems to be different from nvinferserver, and there are a few manual steps I had to do after installing DeepStream 6.0.1 to get nvinferserver working.

First, run the Triton backend setup script, which should be installed as part of DeepStream:

sudo /opt/nvidia/deepstream/deepstream-6.0/samples/triton_backend_setup.sh
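As a sanity check you can look for the Triton backend libraries the script installs. The directory path below is an assumption based on a default DeepStream install layout, so adjust it if your layout differs:

```shell
#!/bin/sh
# Hypothetical sanity check: after triton_backend_setup.sh runs, the
# Triton backend libraries should land under DeepStream's lib directory.
# BACKEND_DIR is an assumption; adjust it to match your install.
BACKEND_DIR=/opt/nvidia/deepstream/deepstream/lib/triton_backends
if [ -d "$BACKEND_DIR" ]; then
    ls "$BACKEND_DIR"
else
    echo "no Triton backends found at $BACKEND_DIR"
fi
```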

That should install a working version of the nvinferserver plugin, but it still might not show up as an available GStreamer plugin until you delete the GStreamer plugin cache:

rm -rf ${HOME}/.cache/gstreamer-1.0/
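A slightly more defensive version of that step, in case the cache directory does not exist yet:

```shell
#!/bin/sh
# Clear the per-user GStreamer plugin registry cache so the newly
# installed nvinferserver plugin is picked up on the next plugin scan.
CACHE_DIR="${HOME}/.cache/gstreamer-1.0"
if [ -d "$CACHE_DIR" ]; then
    rm -rf "$CACHE_DIR"
    echo "cleared $CACHE_DIR"
else
    echo "no plugin cache at $CACHE_DIR"
fi
```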

At this point you may need to reboot the Jetson Nano. Once you’ve done that, you should be able to see the plugin listed with gst-inspect-1.0:

gst-inspect-1.0 nvinferserver

I also found it difficult to find information on the configuration file format for the nvinferserver plugin, but there should be an example located at:

/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt
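For reference, that file uses DeepStream’s protobuf text format for nvinferserver. Here is a heavily abridged sketch of the shape of such a config; the field names are as I understand them from the DeepStream 6.0 samples, and the model name and repo root are placeholders, so check everything against the shipped example:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "Primary_Detector"  # placeholder model name
      version: -1
      model_repo {
        root: "/opt/nvidia/deepstream/deepstream-6.0/samples/triton_model_repo"
        strict_model_config: true
      }
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
```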

With DeepStream 6.1, in addition to running sudo /opt/nvidia/deepstream/deepstream-6.1/samples/triton_backend_setup.sh, I also needed to run

export GST_PLUGIN_SYSTEM_PATH=/opt/nvidia/deepstream/deepstream/lib

before

gst-inspect-1.0 nvinferserver

could find nvinferserver.
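For completeness, here is the kind of standalone pipeline I’d expect to work once the plugin is visible. This is an untested sketch: nvinferserver takes a config-file-path property rather than model_name/model_repo, and it expects batched, metadata-carrying buffers, so nvstreammux has to sit upstream of it. The config path and camera caps below are assumptions, so adapt them to your model:

```shell
#!/bin/sh
# Sketch of a standalone nvinferserver pipeline. Assumptions: the
# sample config path below, and camera caps that suit your model.
# nvinferserver consumes batched buffers, so nvstreammux is required
# upstream of it.
CONFIG=/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt

PIPELINE="nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
nvinferserver config-file-path=$CONFIG ! fakesink \
nvarguscamerasrc num-buffers=100 ! \
video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! \
m.sink_0"

# Print the full command; on the Jetson itself, run it directly:
echo "gst-launch-1.0 $PIPELINE"
```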