Error when running custom YOLOv4 on deepstream_python_apps

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.1 (Docker container)
• TensorRT Version: 7.2.3

Hi everyone. After training my custom YOLOv4 model with the TAO Toolkit, I run it with deepstream_python_apps and I face this error:

Starting pipeline 

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvDCF][Warning] `minTrackingConfidenceDuringInactive` is deprecated
[NvDCF] Initialized
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: Assertion failed: d == a + length

Aborted (core dumped)

I followed YOLOv4 — TAO Toolkit 3.0 documentation for installation and optimization. Here is my pgie config file: yolov4_pgie_config.txt (1.3 KB)
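In case the attachment is unavailable to other readers: a rough sketch of what a YOLOv4 TAO pgie config typically contains. The paths, key, and class count below are placeholders, not the actual attached values; the parser function and custom library names follow the deepstream_tao_apps samples.

```ini
[property]
gpu-id=0
# placeholder paths and key -- substitute your own
tlt-encoded-model=yolov4_resnet18.etlt
tlt-model-key=tlt_encode
model-engine-file=yolov4_resnet18.etlt_b1_gpu0_fp16.engine
labelfile-path=labels.txt
infer-dims=3;384;1248
num-detected-classes=4
# detector, fp16
network-type=0
network-mode=2
output-blob-names=BatchedNMS
# custom BatchedNMS parser from deepstream_tao_apps
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=post_processor/libnvds_infercustomparser_tao.so
```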

Has anyone met this issue before? Could you please help me with this problem? Thanks in advance!

Did you refer to deepstream_tao_apps/TRT-OSS/x86 at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub to upgrade the TRT OSS plugin?

Can you run the YoloV4 TAO sample with GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream?
Can you run your application with the TAO YoloV4 model downloaded by GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tao3.0?

Hi @mchi, thanks for your response.

Yes, I followed it.

I’m stuck at the model-conversion stage. With UNet I can convert successfully, but with YOLOv4 I get an error:

./tao-converter -e models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine -p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248 -t fp16 -k tlt_encode -m 1 tlt_encode models/yolov4/yolov4_resnet18.etlt
[INFO] [MemUsageChange] Init CUDA: CPU +332, GPU +0, now: CPU 338, GPU 873 (MiB)
[libprotobuf ERROR google/protobuf/] Error parsing text-format onnx2trt_onnx.ModelProto: 1:2: Invalid control characters encountered in text.
[libprotobuf ERROR google/protobuf/] Error parsing text-format onnx2trt_onnx.ModelProto: 1:3: Interpreting non ascii codepoint 165.
[libprotobuf ERROR google/protobuf/] Error parsing text-format onnx2trt_onnx.ModelProto: 1:3: Message type "onnx2trt_onnx.ModelProto" has no field named "U".
[ERROR] ModelImporter.cpp:682: Failed to parse ONNX model from file: /tmp/file3ZNYMb
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Number of optimization profiles does not match model input node number.
Aborted (core dumped)

Is the encoding key missing?
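For comparison, the tao-converter usage in the TAO docs takes the .etlt file as its only positional argument, with the key passed via `-k`. The command above has a stray `tlt_encode` token before the model path, which likely confuses argument parsing. A hypothetical corrected invocation with the same paths and key (the `-m 16` max batch matching the largest optimization profile is an assumption):

```shell
# sketch only -- same paths/key as the failing command above,
# with the extra "tlt_encode" token removed so the .etlt file
# is the sole positional argument
./tao-converter models/yolov4/yolov4_resnet18.etlt \
    -k tlt_encode \
    -p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248 \
    -t fp16 \
    -m 16 \
    -e models/yolov4/yolov4_resnet18.etlt_b1_gpu0_fp16.engine
```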

Here is my update: I can run my custom model with both deepstream_python_apps and deepstream_tao_apps, but it doesn’t predict any bounding boxes.

Could you please help me? Thank you very much!

Hi @mchi, I found the issue: when I export and convert my model to int8 and run inference, it doesn’t predict any bounding boxes, even when I use the training set for calibration. But when I switch to fp16, it runs successfully both in inference and in DeepStream.

Here are my commands for exporting and converting the model to int8 (I followed the Jupyter notebook):

!tao yolo_v4 export -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/yolov4_resnet18_epoch_$EPOCH.tlt  \
                    -o $USER_EXPERIMENT_DIR/export/yolov4_resnet18_epoch_$EPOCH.etlt \
                    -e $SPECS_DIR/yolo_v4_retrain_resnet18_kitti.txt \
                    -k $KEY \
                    --cal_image_dir  $USER_EXPERIMENT_DIR/data/training/image_2 \
                    --data_type int8 \
                    --batch_size 8 \
                    --batches 10 \
                    --cal_cache_file $USER_EXPERIMENT_DIR/export/cal.bin  \
                    --cal_data_file $USER_EXPERIMENT_DIR/export/cal.tensorfile \
                    --verbose


!tao converter -k $KEY  \
                   -p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248 \
                   -c $USER_EXPERIMENT_DIR/export/cal.bin \
                   -e $USER_EXPERIMENT_DIR/export/trt.engine \
                   -b 2 \
                   -o BatchedNMS \
                   -m 8 \
                   -t int8

Does YOLOv4 not support int8, or did I do something wrong? Could you please help me with this problem? Thank you very much!

YOLOv4 can support int8.
For your latest issue (int8 does not predict any bbox): I assume you are training with the KITTI dataset, and I cannot reproduce this with it. I think you are triggering the Jupyter script on your host PC. Does the dGPU of your host PC support int8?

Hi @Morganh

My dgpu is RTX 2080Ti and I think it supports int8.

Actually, my labels are in KITTI format, but I only have the type and bbox fields. The other fields, like truncated, occluded, and so on, I set to 0 because I don’t have that information. Is this a problem for calibration with such a dataset?

RTX 2080Ti should support int8.
For label, please refer to Data Annotation Format — TAO Toolkit 3.0 documentation
pedestrian 0.00 0 0.00 423.17 173.67 433.17 224.03 0.00 0.00 0.00 0.00 0.00 0.00 0.00
The Truncation can be 0.0.
The Occlusion can be 0.
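The 15-field line above can be assembled from just the class name and the four box corners; a small sketch, with truncation, occlusion, alpha, and the 3D fields all zeroed as discussed:

```shell
# Build a KITTI label line when only class and bbox are known.
# Field order: type truncated occluded alpha bbox(4) dims(3) loc(3) rot_y
cls=pedestrian
xmin=423.17; ymin=173.67; xmax=433.17; ymax=224.03
printf '%s 0.00 0 0.00 %s %s %s %s 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n' \
    "$cls" "$xmin" "$ymin" "$xmax" "$ymax"
```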

Please follow the Jupyter notebook to train and generate cal.bin from the KITTI dataset.
If you still have a cal.bin question, please create a new topic in the TAO forum instead; we can track it there.

Thank you very much, I will adjust that. I’ll let you know if I still have calibration problems.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.