Please provide complete information as applicable to your setup.
• Hardware Platform: GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1-1+cuda11.6
• NVIDIA GPU Driver Version: 535.183.01
The DeepStream app pipeline I wrote as a simple example is: filesrc → streammux → nvinferserver → nvvideoconvert → fakesink. This pipeline runs without any problems. The issue only arises when the additional Triton client I built runs in the same process as the DeepStream pipeline.
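For reference, here is roughly how I build that pipeline as a small standalone program. This is only a minimal sketch: the h264parse/nvv4l2decoder decode stage and the file/config paths are placeholders I added for illustration, not my exact setup.

```cpp
// Minimal standalone version of the pipeline above (sketch only; the decode
// elements and the file/config paths are placeholders).
#include <gst/gst.h>

int main(int argc, char* argv[]) {
    gst_init(&argc, &argv);
    GError* err = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 "
        "nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
        "nvinferserver config-file-path=config_infer_triton.txt ! "
        "nvvideoconvert ! fakesink sync=false", &err);
    if (!pipeline) {
        g_printerr("Failed to create pipeline: %s\n", err->message);
        g_clear_error(&err);
        return -1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    if (msg) gst_message_unref(msg);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipeline);
    return 0;
}
```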
Do you have any ideas that could help me resolve this issue? I suspect it happens because I had to build an additional Triton client, even though the DeepStream Triton Docker image already ships with one.
I used the Triton client available in the nvcr.io/nvidia/deepstream:6.3-triton-multiarch base Docker image, which resolved the issue mentioned earlier. However, I am now facing another problem. I need to rebuild the nvinferserver plugin so that I can attach additional metadata needed for the inference process. I made some modifications to the attachDetectionMetadata function in gstnvinferserver_meta_utils.cpp, but the plugin I built does not seem to trigger this function when the pipeline runs. Can you help me understand why this is happening? It could be that I did not rebuild the plugin correctly, or maybe I am missing some parameters in the nvinferserver plugin config.
Thank you for your feedback. I was able to build the nvinferserver plugin with my custom changes, but I am still stuck on this part of the code. Why can't nvinferserver attach mask data the way nvinfer does? Do you have a solution that would help me add mask support to nvinferserver?
Currently nvinferserver does not support instance segmentation. Please refer to the documentation and the open-source code in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinferserver/gstnvinferserver_meta_utils.cpp. Here are two other solutions.
One option is to customize the postprocessing in NvDsInferStatus inferenceDone(); please refer to /opt/nvidia/deepstream/deepstream/sources/TritonOnnxYolo/nvdsinferserver_custom_impl_yolo/nvdsinferserver_custom_process_yolo.cpp. A skeleton of this approach is sketched below.
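The sketch below shows the general shape of such a custom processor. The interface, the factory-function signature, and the custom_process_funcion config field are written as I recall them from the shipped samples, so please verify every name against infer_custom_process.h and the TritonOnnxYolo sources in your DeepStream installation:

```cpp
// Rough skeleton of a custom nvinferserver post-processor (not a drop-in file).
// Class/enum names follow sources/includes/nvdsinferserver/infer_custom_process.h;
// verify them against your DeepStream version before building.
#include <vector>
#include <cstdint>
#include "infer_custom_process.h"   // IInferCustomProcessor, IBatchArray, IOptions
#include "nvdsinfer.h"              // NvDsInferStatus

using namespace nvdsinferserver;

class MyTritonCustomProcess : public IInferCustomProcessor {
public:
    // Memory type requested for the buffers handed to extraInputProcess().
    void supportInputMemType(InferMemType& type) override { type = InferMemType::kGpuCuda; }
    bool requireInferLoop() const override { return false; }

    NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& /*primaryInputs*/,
        std::vector<IBatchBuffer*>& /*extraInputs*/,
        const IOptions* /*options*/) override {
        return NVDSINFER_SUCCESS;   // no extra network inputs in this sketch
    }

    // Called for every batch once Triton has produced the output tensors.
    NvDsInferStatus inferenceDone(
        const IBatchArray* outputs, const IOptions* inOptions) override {
        // 1. Locate your raw output tensors in `outputs` by tensor name.
        // 2. Run your own parsing (boxes, classes, instance masks, ...).
        // 3. Fetch NvDsBatchMeta from `inOptions` (the sample uses the
        //    OPTION_NVDS_BATCH_META key) and attach NvDsObjectMeta,
        //    mask_params, or user meta in whatever form you need.
        (void)outputs; (void)inOptions;
        return NVDSINFER_SUCCESS;
    }

    void notifyError(NvDsInferStatus /*status*/) override {}
};

// Factory symbol referenced from the nvinferserver config, e.g.
//   extra { custom_process_funcion: "CreateInferServerCustomProcess" }
// (field spelling as in the shipped sample configs).
extern "C" IInferCustomProcessor* CreateInferServerCustomProcess(
        const char* /*config*/, uint32_t /*configLen*/) {
    return new MyTritonCustomProcess();
}
```

Build it into a shared library, point the custom_lib path in the nvinferserver config to that library, and set the custom_process_funcion field to the factory name, as done in the sample configs.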
Another option is to use nvinferserver + nvdspostprocess. nvdspostprocess is open source and it supports instance segmentation. Please refer to the doc and the code/config at /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspostprocess/config_infer_primary_post_process.txt. A pipeline sketch is shown below.
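As a rough sketch, this is where nvdspostprocess would sit relative to nvinferserver. The postprocess library path, its config file name, and the property names are assumptions based on the gst-nvdspostprocess README, so please double-check them; also make sure nvinferserver attaches the raw tensor meta (e.g. output_control { output_tensor_meta: true }) so that nvdspostprocess has tensors to parse:

```cpp
// Hypothetical pipeline string (placement sketch only). The postprocess library
// path, its config file, and the property names should be verified against the
// gst-nvdspostprocess README of your DeepStream version.
static const char* kTritonWithPostprocess =
    "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
    "nvinferserver config-file-path=config_infer_triton.txt ! "
    "nvdspostprocess "
        "postprocesslib-name=/opt/nvidia/deepstream/deepstream/lib/libpostprocess_impl.so "
        "postprocesslib-config-file=config_infer_primary_post_process.txt ! "
    "nvvideoconvert ! fakesink sync=false";
```

This string can be passed to gst_parse_launch() in the same way as your earlier snippet.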
I am trying both approaches you suggested, but I am still having difficulty: with nvinferserver I cannot retrieve roi_top, roi_left, offset_top, offset_left, scale_ratio_x, and scale_ratio_y in the nvdspostprocess plugin or in inferenceDone, the way nvinfer exposes them. I hope you can suggest a way for me to attach this information to the metadata.
Besides the instance segmentation information, I also need to carry the ROI data so that later stages of my pipeline can perform the recalculations they require.
From your pipeline, there is no ROI information; how did you set the ROI? You can get the ROI information from the user meta with id NVDS_PREPROCESS_BATCH_META. Please refer to gst_nvinfer_process_tensor_input in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp. A reading sketch is shown below.
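If the ROIs are set through nvdspreprocess, the sketch below shows how to read them back, mirroring what gst_nvinfer_process_tensor_input does. Struct and field names are taken from nvdspreprocess_meta.h / nvdsmeta.h as I recall them; please verify against the headers shipped with your version:

```cpp
// Sketch: walk the batch-level user meta that nvdspreprocess attaches and read
// the per-ROI rectangle, scale ratios, and offsets. Field names per
// nvdspreprocess_meta.h / nvdsmeta.h; verify against your DeepStream headers.
#include "nvdsmeta.h"              // NvDsBatchMeta, NvDsUserMeta
#include "nvdspreprocess_meta.h"   // GstNvDsPreProcessBatchMeta, NvDsRoiMeta

static void read_preprocess_rois(NvDsBatchMeta* batch_meta)
{
    for (NvDsUserMetaList* l = batch_meta->batch_user_meta_list; l; l = l->next) {
        NvDsUserMeta* user_meta = (NvDsUserMeta*)l->data;
        if (user_meta->base_meta.meta_type != NVDS_PREPROCESS_BATCH_META)
            continue;

        auto* preproc_meta = (GstNvDsPreProcessBatchMeta*)user_meta->user_meta_data;
        for (const NvDsRoiMeta& roi : preproc_meta->roi_vector) {
            // roi.roi        : ROI rectangle on the original frame
            // roi.frame_meta : frame this ROI belongs to
            // scale/offset   : how the ROI was mapped into the network input
            g_print("source=%u roi=(%.0f,%.0f %.0fx%.0f) scale=(%.3f,%.3f) offset=(%.1f,%.1f)\n",
                roi.frame_meta ? roi.frame_meta->source_id : 0u,
                roi.roi.left, roi.roi.top, roi.roi.width, roi.roi.height,
                roi.scale_ratio_x, roi.scale_ratio_y,
                roi.offset_left, roi.offset_top);
        }
    }
}
```

You can call something like this from your custom inferenceDone() (after fetching NvDsBatchMeta from the IOptions), or from the nvdspostprocess side or a pad probe, and copy whichever values you need into your own user meta.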