CitySemSegFormer etlt use in Triton

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version Any
• NVIDIA GPU Driver Version (valid for GPU only) Any
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
Can the CitySemSegFormer etlt be used with Triton? If so, what are the configuration requirements for installing it into the Triton model repository (as per the GitHub repo NVIDIA-AI-IOT/tao-toolkit-triton-apps)? Thanks.

Currently, there is no ready-made semantic segmentation Triton configuration.

Are you using DeepStream to develop? There are many detection and segmentation Triton configuration examples:

• For Triton model configuration, please refer to tao-toolkit-triton-apps/configuring_the_client.md at main · NVIDIA-AI-IOT/tao-toolkit-triton-apps · GitHub.
• For postprocessing, please refer to deepstream_tao_apps/deepstream_seg_app.c at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub.
• For DeepStream nvinferserver configuration, please refer to deepstream\deepstream\configs\deepstream-app-triton\source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt and GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream.
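As a rough illustration only, a Triton model-repository entry for a TensorRT semantic segmentation engine generally has the layout sketched below. The directory name, tensor names, data types, and dimensions here are placeholders, not values taken from the CitySemSegFormer etlt; they must be checked against the actual exported engine.

```
model_repository/
└── citysemsegformer_tao/          # placeholder model name
    ├── config.pbtxt
    └── 1/
        └── model.plan             # TensorRT engine built from the etlt

# config.pbtxt (all names/shapes are placeholders -- inspect your engine)
name: "citysemsegformer_tao"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "input"                  # placeholder tensor name
    data_type: TYPE_FP32
    dims: [ 3, 1024, 1820 ]        # placeholder C,H,W
  }
]
output [
  {
    name: "output"                 # placeholder tensor name
    data_type: TYPE_INT32
    dims: [ 1024, 1820, 1 ]        # placeholder
  }
]
```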

Thank you, that is helpful.

Indeed, I tried the DeepStream approach (I am familiar with the SDK and have written many pipelines in Python and C++). However, I followed the instructions for using the etlt in DeepStream (via the OSS DS/TAO repo) and unfortunately get a blank video on the display (and in the filesink output).

I’m using the Deepstream Container

nvcr.io/nvidia/deepstream:6.1.1-devel

and using VS Code Dev Containers for the environment. I set xhost + before attaching to the container, and I tried the PeopleSeg model, which runs perfectly as a test (so there are no xhost issues or complications from remote containers). I'm using the nvinfer_config and the label files that come with the etlt, and the paths are set correctly.
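For context, a segmentation-style nvinfer [property] section generally has the shape below. Every value here is a placeholder for illustration; it is not the actual config file shipped with the model, and the key, paths, dims, and blob names must match your own setup.

```
[property]
gpu-id=0
net-scale-factor=0.007843                 # placeholder preprocessing scale
tlt-encoded-model=/path/to/model.etlt     # placeholder path
tlt-model-key=<your-model-key>            # placeholder; use the published key
labelfile-path=/path/to/labels.txt        # placeholder path
infer-dims=3;1024;1820                    # placeholder; must match the etlt input
network-mode=2                            # 2 = FP16
network-type=2                            # 2 = semantic segmentation
segmentation-threshold=0.0
output-blob-names=output                  # placeholder blob name
```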

There are no errors during pipeline startup, and the display appears, but the video is blank, as is the saved file (when the -d switch is not used).

My configuration:

• Hardware Platform (Jetson / GPU) Dual A6000 GPUs

• DeepStream Version nvcr.io/nvidia/deepstream:6.1.1-devel container

• JetPack Version (valid for Jetson only) N/A

• TensorRT Version 8.4.1-1+cuda11.6

• NVIDIA GPU Driver Version (valid for GPU only) 520.56.06 CUDA Version: 11.8

Thank you.

Please see image that shows the pipeline running but no video in player.

Update: Solved
The problem was in the deepstream_seg_app.c file (the pipeline). You have to change the streammux and tiler properties to match the model's input shape and recompile.
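For anyone hitting the same blank-video symptom, the change amounts to something like the fragment below inside deepstream_seg_app.c. This is a sketch, not the exact shipped code: the 1820x1024 values are placeholders for whatever input shape your particular engine expects, and the element variable names are assumed from the sample app.

```c
/* Sketch of the fix in deepstream_seg_app.c: make the streammux and
 * tiler output resolution match the model input shape, then recompile.
 * Width/height values are placeholders -- substitute the actual
 * network input dimensions of your segmentation engine. */
#define MUXER_OUTPUT_WIDTH  1820   /* placeholder: model input width  */
#define MUXER_OUTPUT_HEIGHT 1024   /* placeholder: model input height */

g_object_set (G_OBJECT (streammux),
              "width", MUXER_OUTPUT_WIDTH,
              "height", MUXER_OUTPUT_HEIGHT,
              "batch-size", num_sources,
              "batched-push-timeout", 40000, NULL);

g_object_set (G_OBJECT (tiler),
              "rows", 1, "columns", 1,   /* single source in this test */
              "width", MUXER_OUTPUT_WIDTH,
              "height", MUXER_OUTPUT_HEIGHT, NULL);
```

With the muxer and tiler scaled to the model's input geometry, the rendered frames and the saved file should no longer come out blank.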