DeepStream 5.1 won't deserialize TLT's YOLOv4 FP16 .etlt file to generate the .engine

• Hardware Platform (Jetson / GPU)
GPU RTX 3060 12GB
• DeepStream Version
DeepStream 5.1, using nvidia-docker
• JetPack Version (valid for Jetson only)
• TensorRT Version
7.2.3-1+cuda11.1
• NVIDIA GPU Driver Version (valid for GPU only)
460.73.01
CUDA 11.1.105
• Issue Type( questions, new requirements, bugs)
bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

  • Train a YOLOv4 model using TLT.
  • Get the .etlt file.
  • Add it to the primary GIE config.
  • Start deepstream-app.

The issue
I am having trouble getting a TLT-trained model to run on DeepStream.
First, I should say I have run YOLOv4 models on DeepStream before; I already updated libnvinfer_plugin.so, compiled from TensorRT OSS.

My problem is with a new model: same architecture, same Jupyter notebook. I only updated the training dataset and my card (from an RTX 2060 to a 3060) and ran the notebook again, successfully.

The problem begins when I move the model into DeepStream's Docker container.
I get the following output before video streaming can begin.

ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/telconet/aiagent/configs/../model/yolov4_resnet18_epoch_080.etlt_b5_gpu0_fp16.engine open error
0:00:01.050577638   254 0x5638d1651400 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/telconet/aiagent/configs/../model/yolov4_resnet18_epoch_080.etlt_b5_gpu0_fp16.engine failed
0:00:01.050635480   254 0x5638d1651400 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/telconet/aiagent/configs/../model/yolov4_resnet18_epoch_080.etlt_b5_gpu0_fp16.engine failed, try rebuild
0:00:01.050646599   254 0x5638d1651400 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: encoded_bg/concat: all concat input tensors must have the same dimensions except on the concatenation axis (1), but dimensions mismatched at index 0. Input 0 shape: [2457,6,1], Input 1 shape: [2640,10,1]
(the encoded_bg/concat error above is repeated nine times in total)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Parser error: yolo_conv2_bn/batchnorm/mul_1: The input to the Scale Layer is required to have a minimum of 3 dimensions.
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:01.473548014   254 0x5638d1651400 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

This is my config file for the primary GIE:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=../model/yolov4_labels_copec.txt
tlt-encoded-model=../model/yolov4_resnet18_epoch_080.etlt
model-engine-file=../model/yolov4_resnet18_epoch_080.etlt_b5_gpu0_fp16.engine
tlt-model-key=<key>
uff-input-dims=3;704;1280;0
uff-input-blob-name=Input
batch-size=5
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=BatchedNMS
cluster-mode=2
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../lib/libnvds_infercustomparser_tlt.so

[class-attrs-all]
pre-cluster-threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=100
detected-min-h=100
#detected-max-w=1000
#detected-max-h=1000
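For reference, the preprocessing implied by net-scale-factor and offsets in the config above works out to y = net-scale-factor * (x - offset), applied per channel. The snippet below is only an illustrative sketch of that formula, not DeepStream code:

```python
# Illustrative sketch (not DeepStream code): how nvinfer's net-scale-factor
# and offsets parameters are applied to each pixel, channel-wise.
NET_SCALE_FACTOR = 1.0
OFFSETS = (103.939, 116.779, 123.68)  # per-channel means; model-color-format=1 means BGR order

def preprocess_pixel(pixel, scale=NET_SCALE_FACTOR, offsets=OFFSETS):
    """Apply y = scale * (x - offset) to each channel of one pixel."""
    return tuple(scale * (x - m) for x, m in zip(pixel, offsets))

# A mid-gray BGR pixel:
print(preprocess_pixel((128, 128, 128)))
```

With net-scale-factor=1.0 this is plain mean subtraction; the offsets here are the classic Caffe-style ImageNet means, which matches model-color-format=1 (BGR).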

Thank you in advance

Did you compile the OSS lib on your new GPU (RTX 3060)?
Also, you can use our demo YOLOv4 model to verify whether your environment is good:
https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps

Hello @yuweiw. Thank you for your answer.
TensorRT OSS was compiled on the 2060, but that was not the problem.

My problem was related to the input dimensions.
I had my primary GIE configured with uff-input-dims=3;704;1280;0,
but my TLT training spec file was configured with 3;672;1248.

I updated my primary-gie config and DeepStream was able to generate the .engine.
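In hindsight, a quick sanity check like the sketch below (a hypothetical helper, not part of DeepStream or TLT) would have caught the mismatch between uff-input-dims in the GIE config and the dimensions in the training spec before launching the pipeline:

```python
# Hypothetical sanity check: compare the C;H;W portion of uff-input-dims
# from an nvinfer config against the training spec's input dimensions.
def parse_uff_input_dims(value):
    """Parse a 'C;H;W;order' string like '3;704;1280;0' into (C, H, W)."""
    parts = [int(p) for p in value.split(";")]
    return tuple(parts[:3])

def dims_match(config_value, training_dims):
    """Return True if the config dims equal the training dims (C, H, W)."""
    return parse_uff_input_dims(config_value) == tuple(training_dims)

# The mismatch from this thread:
print(dims_match("3;704;1280;0", (3, 672, 1248)))  # → False (the broken setup)
print(dims_match("3;672;1248;0", (3, 672, 1248)))  # → True (the fixed config)
```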

