Triton reference graph throws an error "Could not get static pad 'sink'"

Environment

TensorRT Version : 8.0.1.6-1+cuda10.2
GPU Type : Jetson Xavier NX Developer Kit GPU
Nvidia Driver Version : Not readily found, but it should be the one bundled with the NX developer kit OS image
CUDA Version : 10.2
CUDNN Version : 8.2.1.32-1+cuda10.2
Operating System + Version : Ubuntu 18.04.5 LTS
Python Version (if applicable) : 2.7.17
TensorFlow Version (if applicable) : N/A
PyTorch Version (if applicable) : N/A
Baremetal or Container (if container which image + tag) : Baremetal Jetson Xavier NX Developer Kit

Relevant Files

Reference graphs of DeepStream SDK 6.0 are installed, and many of them work fine when following the instructions in their README files.

Steps To Reproduce

  • Fresh install of Xavier NX with JetPack 4.6
  • Install DeepStream SDK 6
  • Install Graph Composer runtime for Jetson
  • Install Graph Composer reference apps
  • Install triton prerequisites for triton reference graph
  • Run the below:
cd /opt/nvidia/deepstream/deepstream-6.0/reference_graphs/deepstream-triton

/opt/nvidia/graph-composer/execute_graph.sh deepstream-triton.yaml deepstream-triton.parameters.jetson.yaml -d target_triton_aarch64.yaml

The following is emitted to the terminal and the error messages are near the end of the snippet below:

Graphs: deepstream-triton.yaml,deepstream-triton.parameters.jetson.yaml
Target: target_triton_aarch64.yaml
===================================================================
Running deepstream-triton.yaml
===================================================================
[INFO] Writing manifest to /tmp/ds.deepstream-triton/manifest.yaml 
2021-11-22 21:40:13.374 INFO  gxf/gxe/gxe.cpp@98: Creating context
2021-11-22 21:40:13.413 INFO  gxf/gxe/gxe.cpp@85: Loading app: '/opt/nvidia/deepstream/deepstream-6.0/reference_graphs/deepstream-triton/deepstream-triton.yaml'
2021-11-22 21:40:13.413 INFO  gxf/std/yaml_file_loader.cpp@59: Loading GXF entities from YAML file '/opt/nvidia/deepstream/deepstream-6.0/reference_graphs/deepstream-triton/deepstream-triton.yaml'...
2021-11-22 21:40:13.424 INFO  gxf/gxe/gxe.cpp@85: Loading app: '/opt/nvidia/deepstream/deepstream-6.0/reference_graphs/deepstream-triton/deepstream-triton.parameters.jetson.yaml'
2021-11-22 21:40:13.424 INFO  gxf/std/yaml_file_loader.cpp@59: Loading GXF entities from YAML file '/opt/nvidia/deepstream/deepstream-6.0/reference_graphs/deepstream-triton/deepstream-triton.parameters.jetson.yaml'...
2021-11-22 21:40:13.426 INFO  gxf/gxe/gxe.cpp@153: Initializing...
2021-11-22 21:40:13.479 INFO  extensions/nvdsbase/nvds_scheduler.cpp@266: This program is linked against GStreamer 1.14.5 

2021-11-22 21:40:13.480 INFO  extensions/nvdsmuxdemux/nvstreammux.hpp@27: initialize: nvstreammux nv_ds_stream_mux337..7710

2021-11-22 21:40:13.480 INFO  extensions/nvdstriton/nvinferserverbin.hpp@23: initialize: nvinferserverbin nv_ds_triton341..fba8

2021-11-22 21:40:13.481 INFO  extensions/nvdsvisualization/nvosdbin.hpp@24: initialize: nvosdbin nv_ds_osd345..e4e0

2021-11-22 21:40:13.481 INFO  extensions/nvdsoutputsink/nvvideorenderersinkbin.hpp@24: initialize: nvvideorenderersinkbin nv_ds_video_renderer349..28d0

2021-11-22 21:40:13.481 INFO  extensions/nvdsvisualization/nvtilerbin.hpp@23: initialize: nvtilerbin nv_ds_tiler352..fc88

2021-11-22 21:40:13.482 INFO  gxf/gxe/gxe.cpp@160: Running...
2021-11-22 21:40:13.482 INFO  extensions/nvdsbase/nvds_scheduler.cpp@117: Scheduling 6 elements and 1 components
2021-11-22 21:40:13.482 INFO  extensions/nvdssource/multi_uri_src_bin.cpp@333: create_element: NvDsMultiSrcInput nv_ds_multi_src_input334..71d0

2021-11-22 21:40:13.482 INFO  extensions/nvdssource/multi_uri_src_bin.cpp@379: bin_add: bin nv_ds_multi_src_input334..71d0

2021-11-22 21:40:13.483 INFO  extensions/nvdsmuxdemux/nvstreammux.hpp@37: create_element: nvstreammux nv_ds_stream_mux337..7710

2021-11-22 21:40:13.518 INFO  extensions/nvdsmuxdemux/nvstreammux.hpp@61: bin_add: nvstreammux nv_ds_stream_mux337..7710

2021-11-22 21:40:13.519 INFO  extensions/nvdstriton/nvinferserverbin.hpp@31: create_element: nvinferserverbin nv_ds_triton341..fba8

2021-11-22 21:40:13.614 INFO  extensions/nvdstriton/nvinferserverbin.hpp@55: bin_add: nvinferserverbin nv_ds_triton341..fba8


(gxe:13418): GLib-GObject-CRITICAL **: 21:40:13.614: g_object_setv: assertion 'G_IS_OBJECT (object)' failed
2021-11-22 21:40:13.614 INFO  extensions/nvdsvisualization/nvosdbin.hpp@32: create_element: nvosdbin nv_ds_osd345..e4e0

2021-11-22 21:40:13.620 INFO  extensions/nvdsvisualization/nvosdbin.hpp@56: bin_add: nvosdbin nv_ds_osd345..e4e0

2021-11-22 21:40:13.620 INFO  extensions/nvdsoutputsink/nvvideorenderersinkbin.hpp@32: create_element: nvvideorenderersinkbin nv_ds_video_renderer349..28d0

2021-11-22 21:40:13.620 INFO  extensions/nvdsoutputsink/nvvideorenderersinkbin.hpp@54: bin_add: nvvideorenderersinkbin nv_ds_video_renderer349..28d0

2021-11-22 21:40:13.621 INFO  extensions/nvdsvisualization/nvtilerbin.hpp@31: create_element: nvtilerbin nv_ds_tiler352..fc88

2021-11-22 21:40:13.622 INFO  extensions/nvdsvisualization/nvtilerbin.hpp@55: bin_add: nvtilerbin nv_ds_tiler352..fc88

2021-11-22 21:40:13.622 ERROR extensions/nvdsbase/nvds_io.cpp@23: Could not get static pad 'sink' from 'NvDsTriton..fe10/nv_ds_triton341..fba8'
2021-11-22 21:40:13.622 ERROR gxf/std/program.cpp@310: Couldn't run async. Deactivating...
2021-11-22 21:40:13.624 ERROR gxf/core/runtime.cpp@907: Graph run failed with error: GXF_FAILURE
2021-11-22 21:40:13.624 ERROR gxf/gxe/gxe.cpp@163: GxfGraphRunAsync Error: GXF_FAILURE
*******************************************************************
End deepstream-triton.yaml
*******************************************************************
[INFO] Graph installation directory /tmp/ds.deepstream-triton and manifest /tmp/ds.deepstream-triton/manifest.yaml retained 

Can you run “gst-inspect-1.0 nvinferserver” on your device? What is the output?

I got:

No such element or plugin 'nvinferserver'

I even tried running the Triton backend setup from the README once again, but I still get the same response as above.

Running “registry extn list” shows an entry named NvDsTritonExt, so I am not sure why GStreamer does not recognise it.

You need to install NVIDIA® Triton Inference Server (previously called TensorRT Inference Server) Release 2.13.0 for nvinferserver to work. See Gst-nvinferserver — DeepStream 6.0 Release documentation.

Are you working with a Docker container? If yes, which one?

I tried using containers but faced some issues as well, so I would like to stick with classic host execution. The instructions in that reference graph's README include a step about installing the Triton backend, which I presume is the Triton Inference Server, so I am not sure why another explicit install would be needed.

The script at line 49 above grabs a copy of v2.13 from GitHub.

Do I need to compile Triton Inference Server on the Jetson device, or does the above script pull the required binaries?

Maybe the plugin is blacklisted somehow.

If you are sure the Triton backend is installed correctly, please use “rm -rf ~/.cache/gstreamer-1.0/” to clear the GStreamer registry cache and run “gst-inspect-1.0 nvinferserver” again.

Thanks @Fiona.Chen

Clearing the GStreamer cache and then running “registry repo sync -n ngc-public” fixed the problem of the Triton GStreamer plugin being blacklisted.
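For anyone hitting the same blacklist problem, the recovery sequence from this thread can be sketched as a small script. The `registry` CLI ships with Graph Composer; the guards below are only there so the script degrades gracefully on a machine where a tool is missing:

```shell
#!/bin/sh
# Recovery steps for a blacklisted nvinferserver plugin, as used in this thread.

# 1. Remove the per-user GStreamer plugin cache so all plugins (including any
#    previously blacklisted ones) are re-scanned on the next GStreamer run.
#    Safe even if the directory does not exist.
rm -rf "$HOME/.cache/gstreamer-1.0/"

# 2. Re-sync the Graph Composer extension registry from the public NGC repo
#    (skipped here if the 'registry' CLI is not on PATH).
if command -v registry >/dev/null 2>&1; then
    registry repo sync -n ngc-public
fi

# 3. Verify that GStreamer now sees the plugin (skipped if gst-inspect-1.0
#    is not on PATH).
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
    gst-inspect-1.0 nvinferserver
fi
```

If step 3 still prints "No such element or plugin", the backend install itself is likely the problem rather than the registry cache.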

So the error in this thread's title has now been bypassed, but I hit a different Triton error related to memory allocation:

ERROR: Triton: failed to create repo server, triton_err_str:Not found, err_msg:unable to load backend library: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block

Here is the full trace.

2021-11-24 20:35:37.093487: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
I1124 10:35:37.209477 14813 tensorflow.cc:2171] TRITONBACKEND_Initialize: tensorflow
I1124 10:35:37.209600 14813 tensorflow.cc:2184] Triton TRITONBACKEND API version: 1.4
I1124 10:35:37.209631 14813 tensorflow.cc:2190] 'tensorflow' TRITONBACKEND API version: 1.4
I1124 10:35:37.209656 14813 tensorflow.cc:2211] backend configuration:
{"cmdline":{"allow-soft-placement":"true","gpu-memory-fraction":"0.350000"}}
I1124 10:35:37.211596 14813 tritonserver.cc:1718] 
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                                            |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                                           |
| server_version                   | 2.13.0                                                                                                                           |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_m |
|                                  | emory cuda_shared_memory binary_tensor_data statistics                                                                           |
| model_repository_path[0]         | /opt/nvidia/deepstream/deepstream-6.0/samples/triton_model_repo                                                                  |
| model_control_mode               | MODE_EXPLICIT                                                                                                                    |
| strict_model_config              | 1                                                                                                                                |
| pinned_memory_pool_byte_size     | 67108864                                                                                                                         |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                                                         |
| min_supported_compute_capability | 5.3                                                                                                                              |
| strict_readiness                 | 1                                                                                                                                |
| exit_timeout                     | 30                                                                                                                               |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------+

I1124 10:35:37.211745 14813 server.cc:231] No server context available. Exiting immediately.
ERROR: Triton: failed to create repo server, triton_err_str:Not found, err_msg:unable to load backend library: /usr/lib/aarch64-linux-gnu/libgomp.so.1: cannot allocate memory in static TLS block
ERROR: failed to initialize trtserver on repo dir: root: "/opt/nvidia/deepstream/deepstream-6.0/samples/triton_model_repo"
log_level: 2
strict_model_config: true
tf_gpu_memory_fraction: 0.35
cuda_device_memory {
  memory_pool_byte_size: 67108864
}
pinned_memory_pool_byte_size: 67108864

0:00:00.547118160 14813   0x7f64632900 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<nvinferserver_bin_nvinferserver> nvinferserver[UID 5]: Error in createNNBackend() <infer_trtis_context.cpp:225> [UID = 5]: model:ssd_inception_v2_coco_2018_01_28 get triton server instance failed. repo:root: "/opt/nvidia/deepstream/deepstream-6.0/samples/triton_model_repo"
log_level: 2
strict_model_config: true
tf_gpu_memory_fraction: 0.35
cuda_device_memory {
  memory_pool_byte_size: 67108864
}
pinned_memory_pool_byte_size: 67108864

0:00:00.547191667 14813   0x7f64632900 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<nvinferserver_bin_nvinferserver> nvinferserver[UID 5]: Error in initialize() <infer_base_context.cpp:81> [UID = 5]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRITON_ERROR
0:00:00.547222324 14813   0x7f64632900 WARN           nvinferserver gstnvinferserver_impl.cpp:507:start:<nvinferserver_bin_nvinferserver> error: Failed to initialize InferTrtIsContext
0:00:00.547242837 14813   0x7f64632900 WARN           nvinferserver gstnvinferserver_impl.cpp:507:start:<nvinferserver_bin_nvinferserver> error: Config file path: /home/dev/config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt
0:00:00.547723974 14813   0x7f64632900 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<nvinferserver_bin_nvinferserver> error: gstnvinferserver_impl start failed
2021-11-24 20:35:37.215 ERROR extensions/nvdsbase/nvds_scheduler.cpp@179: Failed to set GStreamer pipeline to PLAYING
Returned, stopping playback
Deleting pipeline
2021-11-24 20:35:37.226 INFO  gxf/gxe/gxe.cpp@182: Deinitializing...
2021-11-24 20:35:37.228 INFO  gxf/gxe/gxe.cpp@189: Closing log file...
2021-11-24 20:35:37.228 INFO  gxf/gxe/gxe.cpp@204: Destroying context
2021-11-24 20:35:37.229 INFO  gxf/gxe/gxe.cpp@211: Done.
*******************************************************************
End deepstream-triton.yaml
*******************************************************************

Sorry, I did not search for the error first; the README file has some hints about this issue on Jetson.

I will give it a go.
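For reference, the README hint for this libgomp static-TLS failure on Jetson is typically a preload workaround. A sketch, assuming the library path from the error message above:

```shell
# Workaround sketch for "cannot allocate memory in static TLS block":
# preload libgomp so its static TLS slot is reserved at process startup,
# before Triton dlopen()s the TensorFlow backend that depends on it.
# The path is the aarch64 one quoted in the error message above.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1

# Then launch the graph exactly as in "Steps To Reproduce" (commented out
# here because the script only exists on the Jetson target):
# /opt/nvidia/graph-composer/execute_graph.sh deepstream-triton.yaml \
#     deepstream-triton.parameters.jetson.yaml -d target_triton_aarch64.yaml
```

The preload has to happen in the environment of the process that loads the Triton backend, so set it in the same shell (or wrapper script) that launches the graph.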

It finally worked 😃

The frame rate of the output is around 6 fps. I can imagine Triton being a bit slower than TensorRT, plus a bigger model (ssd_inception_v2_coco_2018_01_28) is in use. I was still hoping for something faster than that, but nevertheless I'm happy it's working now.

Thank you so much.