TAO-exported model fails to deploy in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Windows 11 + WSL2 (Ubuntu 22.04), RTX 4060 Ti
• DeepStream Version
DeepStream 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only)
555.99
• Issue Type (questions, new requirements, bugs)

Hi, I have a problem deploying a TAO-exported model to DeepStream. Thanks for your help:

  1. I ran the export with the following command:

!tao model classification_tf1 export \
  -m $USER_EXPERIMENT_DIR/output_retrain/weights/resnet_$EPOCH.hdf5 \
  -o $USER_EXPERIMENT_DIR/export/final_model.onnx \
  -e $SPECS_DIR/classification_retrain_spec.cfg \
  --classmap_json $USER_EXPERIMENT_DIR/output_retrain/classmap.json \
  --gen_ds_config
and I got the exported files:
Exported model:

total 193M
-rw-r--r-- 1 gty gty 143M Aug 1 17:04 calibration.tensor
-rw-r--r-- 1 gty gty 39M Aug 1 16:42 final_model.onnx
-rw-r--r-- 1 gty gty 12M Aug 1 17:07 final_model.trt
-rw-r--r-- 1 gty gty 1.9K Aug 1 17:04 final_model_int8_cache.bin
-rw-r--r-- 1 gty gty 135 Aug 1 16:42 labels.txt
-rw-r--r-- 1 gty gty 193 Aug 1 16:42 nvinfer_config.txt
-rw-r--r-- 1 gty gty 542 Aug 1 17:07 status.json
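
For reference, the exported ONNX can be sanity-checked before wiring it into DeepStream. A minimal sketch with the onnx Python package (file name taken from the listing above; the expected shape is an assumption based on the config below) to confirm the input has an explicit batch dimension:

import onnx

model = onnx.load("final_model.onnx")
for inp in model.graph.input:
    # Each dim is either symbolic (dim_param) or fixed (dim_value).
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # e.g. input_1 ['N', 3, 224, 224] for explicit batch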

  2. In the DeepStream samples, I modified config_infer_primary.txt to look like this:

[property]
gpu-id=0

#tlt-encoded-model=../../models/Primary_Detector/resnet18_trafficcamnet.etlt
#model-engine-file=../../models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
#int8-calib-file=../../models/Primary_Detector/cal_trt.bin

batch-size=30
process-mode=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=1
interval=0
gie-unique-id=1
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
force-implicit-batch-dim=1
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/this/directory/libnvds_infercustomparser.so

# 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)

cluster-mode=2
#scaling-filter=0
#scaling-compute-hw=0

onnx-file=../../models/tao_exports/final_model.onnx
int8-calib-file=../../models/tao_exports/final_model_int8_cache.bin
labelfile-path=../../models/tao_exports/labels.txt

net-scale-factor=1.0
offsets=103.939;116.779;123.68
infer-dims=3;224;224
tlt-model-key=
network-type=1
num-detected-classes=20
model-color-format=1
maintain-aspect-ratio=0
output-tensor-meta=0
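
For reference, net-scale-factor, offsets, and model-color-format describe the input preprocessing nvinfer applies: y = net-scale-factor * (pixel - offset), per channel, on a BGR frame (model-color-format=1). A minimal numpy sketch of what the values above imply (shapes are illustrative):

import numpy as np

# Values copied from the [property] section above.
net_scale_factor = 1.0
offsets = np.array([103.939, 116.779, 123.68], dtype=np.float32).reshape(3, 1, 1)

frame = np.zeros((3, 224, 224), dtype=np.float32)  # CHW BGR frame (infer-dims)
tensor = net_scale_factor * (frame - offsets)      # tensor fed to the network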

  3. Then I started deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt and got an error like this:
0:00:11.115312125   367 0x55b0673783a0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 4]: Use deserialized engine model: /root/feibot/deepStream/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine
0:00:11.126531524   367 0x55b0673783a0 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<secondary_gie_0> [UID 4]: Load new model:/root/feibot/deepStream/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1494 Deserialize engine failed because file path: /root/feibot/deepStream/samples/configs/deepstream-app/../../models/tao_exports/final_model.etlt_b4_gpu0_int8.engine open error
0:00:15.681805055   367 0x55b0673783a0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2083> [UID = 1]: deserialize engine from file :/root/feibot/deepStream/samples/configs/deepstream-app/../../models/tao_exports/final_model.etlt_b4_gpu0_int8.engine failed
0:00:15.874624846   367 0x55b0673783a0 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2188> [UID = 1]: deserialize backend context from engine from file :/root/feibot/deepStream/samples/configs/deepstream-app/../../models/tao_exports/final_model.etlt_b4_gpu0_int8.engine failed, try rebuild
0:00:15.874672514   367 0x55b0673783a0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: [TRT]: ModelImporter.cpp:777: ERROR: ModelImporter.cpp:547 In function importModel:
[4] Assertion failed: !mImporterCtx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:316 Failed to parse onnx file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:976 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:809 failed to build network.
0:00:20.904244382   367 0x55b0673783a0 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2129> [UID = 1]: build engine file failed
0:00:21.101645619   367 0x55b0673783a0 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2215> [UID = 1]: build backend context failed
0:00:21.101687648   367 0x55b0673783a0 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1352> [UID = 1]: generate backend failed, check config file settings
0:00:21.101719534   367 0x55b0673783a0 WARN                 nvinfer gstnvinfer.cpp:912:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:21.102616584   367 0x55b0673783a0 WARN                 nvinfer gstnvinfer.cpp:912:gst_nvinfer_start:<primary_gie> error: Config file path: /root/feibot/deepStream/samples/configs/deepstream-app/tryTaoModel_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
** ERROR: <main:706>: Failed to set pipeline to PAUSED
Quitting

Thanks so much for your help in fixing this problem.

Please comment out “force-implicit-batch-dim=1”. The ONNX parser only supports explicit-batch networks, so that key must not be set for ONNX models.
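
For context, the assertion in the log comes from TensorRT's ONNX parser, and force-implicit-batch-dim=1 makes nvinfer create an implicit-batch network, hence the parse failure. A minimal sketch with the TensorRT Python API (TensorRT 8.6 assumed) of the flag the parser needs:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# The ONNX parser requires a network created with EXPLICIT_BATCH, which is
# what nvinfer does once force-implicit-batch-dim is left unset.
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flags)
parser = trt.OnnxParser(network, logger)
with open("final_model.onnx", "rb") as f:
    print("parsed:", parser.parse(f.read()))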

This fixed the problem. Thanks so much!
