Iplugin tensorrt engine error for ds5.0

@ashingtsai
Which platform are you seeing this float infinity error on?
Jetson NX?
I have tried on a Tesla T4 with PyTorch 1.4 and TensorRT 7.0, and there is no problem there.

@ersheng

Yes, I run it on a Jetson NX.

@ersheng

Thank you for your reply.
It seems I can get the right result now, but I am not sure what exactly fixed it.
It may be related to rebuilding trtexec with PyTorch 1.4 and ONNX 1.6, or something like that.
trtexec should be rebuilt with:
make TARGET=aarch64
However, I still cannot see any detections on the result picture
(predictions_trt.jpg).
I will try to debug that, too.
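
For reference, on a standard JetPack install the trtexec sample sources typically live under /usr/src/tensorrt/samples/trtexec (this path is an assumption based on the default layout), so a native rebuild on the Jetson looks roughly like:

cd /usr/src/tensorrt/samples/trtexec
make TARGET=aarch64
# the rebuilt binary is placed under /usr/src/tensorrt/bin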

python3 demo_trt.py yolov4.trt dog.jpg
Reading engine from file yolov4.trt
Shape of the network input: (1, 3, 416, 416)
Length of inputs: 1
Len of outputs: 9
truck: 0.980496
dog: 0.999999
dog: 0.999999
bicycle: 1.000000
bicycle: 1.000000
truck: 0.946777
bicycle: 1.000000
truck: 0.951660
truck: 0.947754
dog: 1.000000
dog: 1.000000
bicycle: 0.999999
bicycle: 1.000000
bicycle: 1.000000
bicycle: 1.000000
bicycle: 1.000000
bicycle: 1.000000
save plot results to predictions_trt.jpg
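
The confidences above look plausible, so if predictions_trt.jpg still comes out empty, a quick thing to check is whether the box coordinates are in the range the plotting code expects. A minimal sketch, assuming each detection is [x1, y1, x2, y2, conf, cls_id] with coordinates normalized to 0..1 as in the pytorch-YOLOv4 demos (the boxes variable is a placeholder for whatever your post-processing returns):

import cv2

def debug_draw(img_path, boxes, out_path="debug_trt.jpg"):
    """Print raw box values and draw them scaled to the image size."""
    img = cv2.imread(img_path)
    h, w = img.shape[:2]
    for box in boxes:
        x1, y1, x2, y2 = box[:4]
        # NaN/inf or huge values here point at broken post-processing
        print("raw box:", x1, y1, x2, y2)
        if max(x1, y1, x2, y2) <= 1.0:  # normalized -> pixel coords
            x1, x2 = x1 * w, x2 * w
            y1, y2 = y1 * h, y2 * h
        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    cv2.imwrite(out_path, img)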

@ashingtsai

You can pull the latest README and source from https://github.com/Tianxiaomo/pytorch-YOLOv4.

The YOLOv4 TensorRT demo runs smoothly on x86.
Send me information if you still have problems running YOLOv4 on Jetson platforms.

@ersheng

Better, but still incorrect.

[screenshot of the incorrect detection results, shared via Google Photos]

Hi @mchi,
Can you share it with me, please?
I have been trying for a while now.
Thanks

Hi @pinktree3,
Eric shared it with you in a previous comment - Iplugin tensorrt engine error for ds5.0 - #5 by ersheng


The batch size is fixed when exporting Darknet weights to an ONNX model. What if I want to run inference on a varying number of videos (dynamic batch size) using this engine file with DeepStream?

I exported the ONNX model with this command:

python demo_darknet2onnx.py cfg/yolov4-hat.cfg yolov4-hat_7000.weights 233.png 1
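
Note that the trailing 1 here is the batch size, so the exported ONNX has a fixed batch dimension. For a dynamic-batch engine, the ONNX graph itself must declare the batch dimension as dynamic. A minimal sketch of how that is typically done with torch.onnx.export (the Net class is a stand-in for the loaded YOLOv4 module, and yolov4_dynamic.onnx is a placeholder name; check whether your copy of demo_darknet2onnx.py already supports a dynamic export):

import torch

class Net(torch.nn.Module):
    """Placeholder standing in for the loaded YOLOv4 network."""
    def forward(self, x):
        return x.mean(dim=(2, 3))

model = Net().eval()
dummy = torch.randn(1, 3, 416, 416)
torch.onnx.export(
    model, dummy, "yolov4_dynamic.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,
    # leave dim 0 symbolic so TensorRT can build an optimization
    # profile over the batch size
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)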

Then I converted it to a TensorRT engine:

/usr/src/tensorrt/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx --explicitBatch --minShapes=input:1x3x416x416 --optShapes=input:32x3x416x416 --maxShapes=input:32x3x416x416 --workspace=2048 --saveEngine=yolov4-hat.engine --fp16
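
This is likely where things go wrong: --minShapes/--optShapes/--maxShapes only take effect when the ONNX input has a dynamic batch dimension, and yolov4_1_3_416_416_static.onnx is static, so the resulting engine supports batch size 1 only. With a dynamic ONNX (yolov4_dynamic.onnx is a hypothetical file name) the build would look like this, with --optShapes set to the batch size the pipeline actually runs at:

/usr/src/tensorrt/bin/trtexec --onnx=yolov4_dynamic.onnx --explicitBatch --minShapes=input:1x3x416x416 --optShapes=input:24x3x416x416 --maxShapes=input:32x3x416x416 --workspace=2048 --saveEngine=yolov4-hat.engine --fp16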

When I deploy it with DeepStream, I get this error:

0:00:20.552723060 12988 0x5608854d5e40 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1642> [UID = 1]: Backend has maxBatchSize 1 whereas 24 has been requested
0:00:20.552736568 12988 0x5608854d5e40 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1813> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-5.0/sources/Yolov4-hat/yolov4-hat.engine failed to match config params, trying rebuild
0:00:20.557183440 12988 0x5608854d5e40 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:934 failed to build network since there is no model file matched.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:872 failed to build network.
0:00:20.557493110 12988 0x5608854d5e40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:20.557510437 12988 0x5608854d5e40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:20.557520575 12988 0x5608854d5e40 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:20.557626494 12988 0x5608854d5e40 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:20.557635838 12988 0x5608854d5e40 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/sources/Yolov4-hat/config_infer_primary_yoloV4.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance

The batch size in my DeepStream app config is 24.
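
For the engine to be picked up at all, the nvinfer config must also agree with it: batch-size must not exceed the engine's maximum batch, and model-engine-file must point at the engine (the "no model file matched" error above is DeepStream failing to rebuild after the mismatch, because only an engine and no model file was given). A hedged sketch of the relevant [property] entries in config_infer_primary_yoloV4.txt, with all other required keys (custom parser, num-detected-classes, etc.) omitted:

[property]
model-engine-file=/opt/nvidia/deepstream/deepstream-5.0/sources/Yolov4-hat/yolov4-hat.engine
# must be <= the engine's max batch (32 in the trtexec command above)
batch-size=24
# 0=FP32, 1=INT8, 2=FP16; matches the --fp16 build
network-mode=2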

Hi CoderJustin,

Please open a new topic for your issue. Thanks