Custom segmentation model in DeepStream

Dear Sir/Madam,

I want to implement a custom segmentation model. What do I need to do?
Details:

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce it.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

DeepStream supports the following types of models:

  • Caffe Model and Caffe Prototxt
  • ONNX
  • UFF file
  • TAO Encoded Model and Key

Please make sure you can generate one of the above model types first. Then you can study the DeepStream documentation for how to deploy the model: Welcome to the DeepStream Documentation — DeepStream 6.1.1 Release documentation

So, is a custom TensorFlow model supported?

You’d better convert it to ONNX first. Getting Started - TensorFlow | onnxruntime
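For example, a TensorFlow SavedModel can usually be converted with the tf2onnx package. A minimal sketch, assuming an existing TensorFlow environment; the model directory, output name, and opset are placeholders to adapt:

    # install the converter
    pip install tf2onnx
    # convert a SavedModel directory to ONNX (opset 13 is a common choice)
    python -m tf2onnx.convert --saved-model ./my_saved_model --output model.onnx --opset 13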

How can I run deep stream segmentation with ONNX model?

Please refer to NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream (github.com). It contains both a segmentation model sample and an ONNX model sample; a build-and-run sketch follows below.
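A minimal sketch of fetching and building the sample apps, assuming a working DeepStream install; the CUDA_VER value is an assumption and must match your platform (see the repo README for your DeepStream version):

    git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
    cd deepstream_tao_apps
    # CUDA_VER must match the CUDA version installed with your DeepStream (assumed 11.6 here)
    export CUDA_VER=11.6
    make
    # run the UNet segmentation sample with its app config (command taken from this thread)
    ./apps/tao_segmentation/ds-tao-segmentation configs/app/seg_app_unet.yml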

  • I trained a UNet model. My weight file format is .tlt, but in the UNet example in deepstream_tao_apps, the tlt-encoded-model format is .etlt.
  • I tried to export the .tlt model to .etlt with the command below:
    !tao unet export \
    -e $SPECS_DIR/unet_train_resnet_unet_isbi.txt \
    -m $EXPERIMENT_DIR/isbi_experiment_unpruned/weights/model_isbi.tlt \
    -o $EXPERIMENT_DIR/isbi_experiment_unpruned/model_isbi.etlt \
    -k $KEY
  • But when I run “./apps/tao_segmentation/ds-tao-segmentation configs/app/seg_app_unet.yml”, I get an error.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce it.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Steps:

  1. I trained a UNet model with the TAO Toolkit, following (run in Colab): nvidia-tao/unet_isbi.ipynb at main · NVIDIA-AI-IOT/nvidia-tao · GitHub
  2. After training, I exported the .tlt weights to .etlt as follows:
    !tao unet export \
    -e $SPECS_DIR/unet_train_resnet_unet_isbi.txt \
    -m $EXPERIMENT_DIR/isbi_experiment_unpruned/weights/model_isbi.tlt \
    -o $EXPERIMENT_DIR/isbi_experiment_unpruned/model_isbi.etlt \
    --engine_file $EXPERIMENT_DIR/isbi_experiment_unpruned/model_isbi.engine \
    --gen_ds_config \
    -k $KEY
  3. I set up the file pgie_unet_tao_config.yml:
    property:
      gpu-id: 0
      net-scale-factor: 0.007843
      model-color-format: 1
      offsets: 127.5;127.5;127.5
      labelfile-path: unet_labels.txt
      ## Replace the following paths with your model files
      model-engine-file: ../../models/unet/model_isbi.engine
      ## current DS cannot parse an ONNX etlt model, so you need to
      ## convert the etlt model to a TensorRT engine first using tao-converter
      ## (see the tao-converter sketch after this config)
      tlt-encoded-model: ../../models/unet/model_isbi.etlt
      tlt-model-key: tlt_encode
      infer-dims: 3;320;320
      batch-size: 1
      ## 0=FP32, 1=INT8, 2=FP16 mode
      network-mode: 1
      num-detected-classes: 2
      interval: 0
      gie-unique-id: 1
      network-type: 2
      output-blob-names: softmax_1
      segmentation-threshold: 0.0
      ## specify the output tensor order, 0 (default value) for CHW and 1 for HWC
      segmentation-output-order: 1

    class-attrs-all:
      roi-top-offset: 0
      roi-bottom-offset: 0
      detected-min-w: 0
      detected-min-h: 0
      detected-max-w: 0
      detected-max-h: 0
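As the comment in the config notes, the ONNX-based .etlt must be converted to a TensorRT engine with tao-converter, and it must be converted on the machine that will run DeepStream: a TensorRT engine is specific to the GPU and TensorRT version it was built with, so an engine generated in Colab will generally not load locally. A hedged sketch; the input tensor name input_1 and the 320x320 shapes are assumptions inferred from this config, so check your exported model's actual input name and dims:

    ./tao-converter \
      -k $KEY \
      -t fp32 \
      -p input_1,1x3x320x320,1x3x320x320,1x3x320x320 \
      -e ../../models/unet/model_isbi.engine \
      ../../models/unet/model_isbi.etlt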

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

• Hardware Platform: GPU
• DeepStream Version: 6.1
• TensorRT Version: 7.2.1.4
• NVIDIA GPU Driver Version (valid for GPU only): Driver Version: 510.108.03, CUDA Version: 11.6

DeepStream 6.1 needs TensorRT 8.2.5.1. Quickstart Guide — DeepStream 6.1.1 Release documentation

Please check your device and confirm the real versions.
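One quick way to see what your installed DeepStream was built against, assuming deepstream-app is on your PATH:

    # prints DeepStream, CUDA, cuDNN, and TensorRT versions
    deepstream-app --version-all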

I can’t find TensorRT on my device. TensorRT 7.2.1.4 is the version in Colab, where I trained the UNet model.

DeepStream cannot work without TensorRT. Are you working with a DeepStream Docker container? Please check the device or environment on which you want to run DeepStream.
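If you want a known-good environment, the DeepStream 6.1 container on NGC already bundles the matching TensorRT. A sketch; the image tag is an assumption, so check NGC for the current tags:

    docker run --gpus all -it --rm nvcr.io/nvidia/deepstream:6.1-devel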

I don’t run DeepStream with Docker. Maybe I run DeepStream with TensorRT, but I don’t know how to check the TensorRT version.

Please try dpkg -l | grep TensorRT
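If that shows nothing, two other common checks; the second assumes the TensorRT Python bindings are installed:

    # TensorRT Debian packages are named libnvinfer*
    dpkg -l | grep nvinfer
    # query the version through the Python bindings
    python3 -c "import tensorrt; print(tensorrt.__version__)"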

Here you are:

The TensorRT version is not correct. Please refer to the DeepStream compatibility table:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#id6

I will try changing the TensorRT version. But why am I still able to run the UNet model example?

You’ve already hit the TensorRT failure in the log you posted. “TRT” is the abbreviation of “TensorRT”.