DeepStream Triton Server and Triton Client cannot be used together

Please provide complete information as applicable to your setup.
• Hardware Platform: GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1-1+cuda11.6
• NVIDIA GPU Driver Version: 535.183.01

Issue encountered: I deployed a pose model with Triton Server and wrote a sample app using DeepStream’s nvinferserver plugin to call the server I had set up, without any issues. However, when I installed the Triton Client from GitHub (triton-inference-server/client) to run inference for other tasks, I encountered the following error in my app.

The DeepStream app pipeline I wrote as a simple example is: filesrc → streammux → nvinferserver → nvvideoconvert → fakesink. This pipeline runs without any problems; the issue only arises when the additional client I built runs in the same process as the DeepStream pipeline.
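For reference, a minimal standalone version of that pipeline built with gst_parse_launch (the file name, resolution, and config path are placeholders; h264parse and nvv4l2decoder are added because the filesrc output has to be decoded before nvstreammux):

```cpp
// Minimal sketch of the pipeline described above. File name, resolution,
// and config path are placeholders for whatever your setup uses.
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (
      "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 "
      "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
      "nvinferserver config-file-path=config_infer.txt ! "
      "nvvideoconvert ! fakesink",
      &err);
  if (!pipeline) {
    g_printerr ("Failed to build pipeline: %s\n", err->message);
    g_clear_error (&err);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  // Block until EOS or an error is posted on the bus.
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```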

The base Docker image I’m using is nvcr.io/nvidia/deepstream:6.1.1-triton.

Do you have any ideas that could help me resolve this issue? I suspect the problem is that I built an additional client even though the DeepStream Triton Docker image already ships one, and that’s why I’m encountering this error.

I am checking.

I used the Triton client available in the nvcr.io/nvidia/deepstream:6.3-triton-multiarch base Docker image, which resolved the issue mentioned earlier. However, I am now facing another problem. I need to rebuild the nvinferserver plugin so it attaches additional metadata needed by my inference process. I modified the attachDetectionMetadata function in gstnvinferserver_meta_utils.cpp, but the rebuilt plugin does not seem to call this function when the pipeline runs. Can you help me understand why this is happening? It could be that I didn’t rebuild the plugin correctly, or maybe I’m missing some parameters in the nvinferserver plugin config.


After switching to the plugin I rebuilt, I don’t see the debug log I added in the code (‘MODIFY CODE PLUGIN GST NVINFERSERVER !!!’).

Please narrow down this issue:

  1. After rebuilding, please replace /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_inferserver.so with the new .so. You can verify which library is actually loaded with the sketch below this list.
  2. You can add more logs in upper-layer functions, for example, GstNvInferServerImpl::InferenceDone.
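
A quick standalone check (my sketch, not from the DeepStream sources): ask the GStreamer registry which shared object provides the nvinferserver element.

```cpp
// Sketch: print which shared object GStreamer loads for "nvinferserver",
// to confirm the rebuilt libnvdsgst_inferserver.so is the one in use.
#include <gst/gst.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElementFactory *factory = gst_element_factory_find ("nvinferserver");
  if (!factory) {
    fprintf (stderr, "nvinferserver not found in the registry\n");
    return 1;
  }

  GstPlugin *plugin =
      gst_plugin_feature_get_plugin (GST_PLUGIN_FEATURE (factory));
  if (plugin) {
    // This should print the path of the replaced .so under
    // /opt/nvidia/deepstream/deepstream/lib/gst-plugins/.
    printf ("nvinferserver loaded from: %s\n",
        gst_plugin_get_filename (plugin));
    gst_object_unref (plugin);
  }
  gst_object_unref (factory);
  return 0;
}
```

If it still prints the old path, clear the GStreamer registry cache (for example, rm -rf ~/.cache/gstreamer-1.0) and run again.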

Thank you for your feedback. I was able to build the nvinferserver plugin with my custom setup, but I am still stuck on this part of the code: why can’t nvinferserver attach mask data like nvinfer does? Do you have any suggestions for adding mask support to nvinferserver?

Currently nvinferserver does not support instance segmentation; please refer to the doc and the open-source code in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinferserver/gstnvinferserver_meta_utils.cpp. Here are other solutions:

  1. Please refer to NvDsInferStatus inferenceDone in /opt/nvidia/deepstream/deepstream/sources/TritonOnnxYolo/nvdsinferserver_custom_impl_yolo/nvdsinferserver_custom_process_yolo.cpp; you can customize postprocessing in inferenceDone (see the skeleton after this list).
  2. You can use “nvinferserver + nvdspostprocess”. nvdspostprocess is open source and supports instance segmentation. Please refer to the doc and the code in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspostprocess/config_infer_primary_post_process.txt.
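
For solution 1, a skeleton of the custom processor might look like this (class and factory names follow the TritonOnnxYolo sample; please verify the exact virtual signatures against infer_custom_process.h in your DeepStream install):

```cpp
// Skeleton of the nvinferserver custom-process hook from solution 1.
#include <cstdint>
#include <vector>

#include "infer_custom_process.h"  // nvdsinferserver custom-process interface
#include "nvdsinfer.h"             // NvDsInferStatus

using namespace nvdsinferserver;

class CustomMaskProcessor : public IInferCustomProcessor {
public:
    ~CustomMaskProcessor() override = default;

    // Request output tensors in CPU memory so they can be parsed directly.
    void supportInputMemType(InferMemType& type) override {
        type = InferMemType::kCpu;
    }
    bool requireInferLoop() const override { return false; }

    // No extra model inputs in this sketch.
    NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& /*primaryInputs*/,
        std::vector<IBatchBuffer*>& /*extraInputs*/,
        const IOptions* /*options*/) override {
        return NVDS_INFER_STATUS_SUCCESS;
    }

    // Called with the raw output tensors of each batch: parse your mask
    // tensor here and attach the metadata yourself. inOptions carries the
    // DeepStream context (batch meta, unique id, ...); see the YOLO sample
    // for the exact option keys.
    NvDsInferStatus inferenceDone(
        const IBatchArray* outputs, const IOptions* inOptions) override {
        (void) outputs;
        (void) inOptions;
        return NVDS_INFER_STATUS_SUCCESS;
    }

    void notifyError(NvDsInferStatus /*status*/) override {}
};

// Factory symbol that the nvinferserver config points at, mirroring the
// YOLO sample.
extern "C" IInferCustomProcessor* CreateInferServerCustomProcess(
    const char* /*config*/, uint32_t /*configLen*/) {
    return new CustomMaskProcessor;
}
```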

I am trying both approaches you suggested, but I’m still stuck: when I use nvinferserver, I cannot retrieve roi_top, roi_left, offset_top, offset_left, scale_ratio_x, and scale_ratio_y in the nvdspostprocess plugin or in inferenceDone the way I can with nvinfer. I hope you can suggest a way to attach this information to the metadata.


Besides the instance-segmentation information, I also need to carry the ROI data so I can perform recalculations in later steps of my pipeline.

From the pipeline, there is no ROI information; how did you set the ROI? You can get ROI information from the user meta with id NVDS_PREPROCESS_BATCH_META. Please refer to gst_nvinfer_process_tensor_input in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp.
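
For example, a sketch of that pattern (struct and field names from nvdspreprocess_meta.h; this meta is only present when nvdspreprocess is in the pipeline):

```cpp
// Sketch: walk the batch-level user meta attached by nvdspreprocess and
// read the per-ROI scale/offset values needed to map network-space
// coordinates back to the original frame.
#include "nvdsmeta.h"
#include "nvdspreprocess_meta.h"

static void
read_preprocess_rois (NvDsBatchMeta *batch_meta)
{
  for (NvDsMetaList *l = batch_meta->batch_user_meta_list; l != NULL;
      l = l->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l->data;
    if (user_meta->base_meta.meta_type != NVDS_PREPROCESS_BATCH_META)
      continue;

    GstNvDsPreProcessBatchMeta *preproc_meta =
        (GstNvDsPreProcessBatchMeta *) user_meta->user_meta_data;

    for (const NvDsRoiMeta &roi_meta : preproc_meta->roi_vector) {
      // roi holds the ROI rectangle; the other fields are the values
      // you listed (scale ratios and padding offsets).
      g_print ("roi left=%f top=%f scale=(%f, %f) offset=(%f, %f)\n",
          roi_meta.roi.left, roi_meta.roi.top,
          roi_meta.scale_ratio_x, roi_meta.scale_ratio_y,
          roi_meta.offset_left, roi_meta.offset_top);
    }
  }
}
```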


Thank you for your ideas; I will experiment with your solutions in my project to see if any issues remain. I appreciate your enthusiastic support!

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

Currently, I have no further issues with my project. If any arise, I will open a new topic. Thanks!