Problem with running etlt classification model in deepstream-app

**• Hardware Platform**
GPU + docker

• DeepStream Version
6.2

• TensorRT Version

**• NVIDIA GPU Driver Version**

• Issue Type
Running deepstream-app with an etlt classification model ends in a segmentation fault. The program crashes while trying to generate the TensorRT engine file for the etlt model, although it successfully generates engines for CarMake and CarColor from the caffe files.

• Reproduce
Run the nvcr.io/nvidia/deepstream:6.2-devel docker image with:

docker container run --rm --net host --runtime=nvidia --gpus=1 -v /tmp/.X11-unix:/tmp/.X11-unix  -e DISPLAY=$DISPLAY -it nvcr.io/nvidia/deepstream:6.2-devel bash
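If DISPLAY forwarding is refused inside the container, the X server on the host may first need to accept local connections (an extra step not shown above, only needed on some setups):

xhost +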

Download the vehicletypenet etlt model using the NGC CLI:

wget https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip
unzip -u -q ngccli_cat_linux.zip
ngc-cli/ngc registry model download-version nvidia/tao/vehicletypenet:pruned_v1.0.1 --dest samples/models/
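Assuming the NGC CLI places the model under samples/models/vehicletypenet_vpruned_v1.0.1/ (the directory name used in the config paths below), a quick sanity check that the referenced files are present:

ls samples/models/vehicletypenet_vpruned_v1.0.1/
# expected, per the config paths below:
# resnet18_vehicletypenet_pruned.etlt  vehicletypenet_int8.txt  labels.txt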

Prepare the deepstream-app configuration files:

cd samples/configs/deepstream-app 

Change the following lines in config_infer_secondary_vehicletypes.txt:

model-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel
proto-file=../../models/Secondary_VehicleTypes/resnet18.prototxt
model-engine-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
int8-calib-file=../../models/Secondary_VehicleTypes/cal_trt.bin
mean-file=../../models/Secondary_VehicleTypes/mean.ppm
labelfile-path=../../models/Secondary_VehicleTypes/labels.txt

to

tlt-encoded-model=../../models/vehicletypenet_vpruned_v1.0.1/resnet18_vehicletypenet_pruned.etlt
model-engine-file=../../models/vehicletypenet_vpruned_v1.0.1/resnet18_vehicletypenet_pruned.etlt_b16_gpu0_int8.engine
int8-calib-file=../../models/vehicletypenet_vpruned_v1.0.1/vehicletypenet_int8.txt
labelfile-path=../../models/vehicletypenet_vpruned_v1.0.1/labels.txt
tlt-model-key=tlt_encode

Change line 173 in source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt:

model-engine-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine

to

model-engine-file=../../models/vehicletypenet_vpruned_v1.0.1/resnet18_vehicletypenet_pruned.etlt_b16_gpu0_int8.engine

Run deepstream-app with:

deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

deepstream-app successfully generates engines for CarMake and CarColor but crashes while generating the engine for the vehicletype model:

WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/vehicletypenet_vpruned_v1.0.1/resnet18_vehicletypenet_pruned.etlt_b16_gpu0_int8.engine open error
0:01:17.173448936   809 0x7f2c440024a0 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 4]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/vehicletypenet_vpruned_v1.0.1/resnet18_vehicletypenet_pruned.etlt_b16_gpu0_int8.engine failed
0:01:17.203249883   809 0x7f2c440024a0 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 4]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/vehicletypenet_vpruned_v1.0.1/resnet18_vehicletypenet_pruned.etlt_b16_gpu0_int8.engine failed, try rebuild
0:01:17.203264277   809 0x7f2c440024a0 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 4]: Trying to create engine from model files
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:179 Uff input blob name is empty
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:728 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:794 Failed to get cuda engine from custom library API
0:01:18.947761323   809 0x7f2c440024a0 ERROR                nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 4]: build engine file failed
Segmentation fault (core dumped)

Please add the config below and check again (I compared against the config from the nvcr.io/nvidia/deepstream:5.1-21.02-samples docker image and made some adaptations for DS 6.2):

uff-input-blob-name=input_1
infer-dims=3;224;224
uff-input-order=0
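For reference, a minimal sketch of how the [property] section of config_infer_secondary_vehicletypes.txt could look after combining the etlt paths from above with these keys; the output-blob-names value is an assumption based on typical TAO classification exports and should be verified against the model card, and all remaining keys stay as in the stock sample config:

[property]
# etlt model settings (from the changes above)
tlt-model-key=tlt_encode
tlt-encoded-model=../../models/vehicletypenet_vpruned_v1.0.1/resnet18_vehicletypenet_pruned.etlt
model-engine-file=../../models/vehicletypenet_vpruned_v1.0.1/resnet18_vehicletypenet_pruned.etlt_b16_gpu0_int8.engine
int8-calib-file=../../models/vehicletypenet_vpruned_v1.0.1/vehicletypenet_int8.txt
labelfile-path=../../models/vehicletypenet_vpruned_v1.0.1/labels.txt
# keys that were missing and caused "Uff input blob name is empty"
uff-input-blob-name=input_1
infer-dims=3;224;224
uff-input-order=0
# assumption: output layer name of TAO classification models; verify against the model card
output-blob-names=predictions/Softmax
# batch-size, network-mode, process-mode, gie-unique-id, operate-on-gie-id, etc.
# remain as in the stock config_infer_secondary_vehicletypes.txt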

Also, if you use driver 535 and CUDA 12.2, you should use DeepStream 6.4. See: dGPU model Platform and OS Compatibility

This solved the problem, thanks!
