I don’t know how to implement the model in DeepStream.
Can you help me? Thanks so much.
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
I trained a UNet model. My weight file format is .tlt, but in the UNet example in "Deepstream_tao_apps", I saw that the tlt-encoded-model format is .etlt.
I tried to export the .tlt model to an .etlt model with the command below:
!tao unet export \
    -e $SPECS_DIR/unet_train_resnet_unet_isbi.txt \
    -m $EXPERIMENT_DIR/isbi_experiment_unpruned/weights/model_isbi.tlt \
    -o $EXPERIMENT_DIR/isbi_experiment_unpruned/model_isbi.etlt \
    -k $KEY
But when I run "./apps/tao_segmentation/ds-tao-segmentation configs/app/seg_app_unet.yml", I get an error.
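The sample config notes that DeepStream cannot parse the ONNX-based .etlt model directly, so the TensorRT engine has to be built first with tao-converter on the device that will run inference. A minimal sketch, assuming the UNet input tensor is named input_1 (verify this against your exported model; it may be input_1:0) and a 1x3x320x320 input shape:

# Sketch only. Flags: -k encode key, -p optimization profile for the
# ONNX-based etlt (name,min,opt,max shapes), -t precision, -e output engine.
# "input_1" and the shapes are assumptions; adjust to your model.
tao-converter -k $KEY \
    -p input_1,1x3x320x320,1x3x320x320,1x3x320x320 \
    -t fp16 \
    -e model_isbi.engine \
    model_isbi.etlt

Then point model-engine-file in the nvinfer config at the generated engine.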
After training, I exported the .tlt weights to .etlt as follows:
!tao unet export \
    -e $SPECS_DIR/unet_train_resnet_unet_isbi.txt \
    -m $EXPERIMENT_DIR/isbi_experiment_unpruned/weights/model_isbi.tlt \
    -o $EXPERIMENT_DIR/isbi_experiment_unpruned/model_isbi.etlt \
    --engine_file $EXPERIMENT_DIR/isbi_experiment_unpruned/model_isbi.engine \
    --gen_ds_config \
    -k $KEY
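One caveat worth checking: the engine written by --engine_file is built on the machine where the export runs, and TensorRT engines are not portable across GPU architectures or TensorRT versions. If DeepStream runs on a different device (for example a Jetson), rebuild the engine there with tao-converter. You can sanity-check that an engine deserializes on the target, for example:

# trtexec ships with TensorRT, typically at /usr/src/tensorrt/bin/trtexec.
# If the engine fails to load here, it will also fail inside DeepStream.
/usr/src/tensorrt/bin/trtexec --loadEngine=model_isbi.engine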
I set up the file pgie_unet_tao_config.yml:
property:
  gpu-id: 0
  net-scale-factor: 0.007843
  model-color-format: 1
  offsets: 127.5;127.5;127.5
  labelfile-path: unet_labels.txt
  ## Replace the following paths with your model files
  ## Current DS cannot parse the ONNX etlt model, so you need to
  ## convert the etlt model to a TensorRT engine first using tao-converter
  model-engine-file: ../../models/unet/model_isbi.engine
  tlt-encoded-model: ../../models/unet/model_isbi.etlt
  tlt-model-key: tlt_encode
  infer-dims: 3;320;320
  batch-size: 1
  ## 0=FP32, 1=INT8, 2=FP16 mode
  network-mode: 1
  num-detected-classes: 2
  interval: 0
  gie-unique-id: 1
  network-type: 2
  output-blob-names: softmax_1
  segmentation-threshold: 0.0
  ## Specify the output tensor order, 0 (default value) for CHW and 1 for HWC
  segmentation-output-order: 1
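For reference, nvinfer preprocesses each pixel as y = net-scale-factor * (x - offset), so the values above give y = 0.007843 * (x - 127.5), i.e. roughly (x - 127.5) / 127.5, which maps the 0-255 input range to about [-1, 1]. This normalization should match what was used during TAO training; if your training spec used a different normalization, adjust net-scale-factor and offsets accordingly.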
DeepStream cannot work without TensorRT. Are you working with a DeepStream docker container? Please check the device or environment on which you want to run DeepStream.
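A few quick checks for the environment (a sketch; the container tag is only an example, pick the one matching your release):

# Report DeepStream and dependency versions (requires the DeepStream SDK)
deepstream-app --version-all

# List installed TensorRT packages
dpkg -l | grep -i tensorrt

# Alternatively, run inside a DeepStream container
docker run --gpus all -it --rm nvcr.io/nvidia/deepstream:6.1-devel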