YOLOv4 deployment issue

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6.2-b5
• TensorRT Version 8.2.1-1+cuda10.2
• Issue Type (questions, new requirements, bugs)

I trained a YOLOv4-tiny detection model and tried to deploy it (previously I was using a DetectNet_v2 model).

When I run DeepStream, I get the following error:

914> [UID = 2]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Unsupported number of graph 0
parseModel: Failed to parse UFF model
ERROR: Failed to build network, error in model parsing.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:06.316131800 534 0x16ba1270 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 2]: build engine file failed
corrupted size vs. prev_size
Aborted (core dumped)

I read that this may be related to the encryption key, but I didn’t use one during training with TAO (the getting_started_v5.2.0 notebook doesn’t include one), and it doesn’t work without the key currently in the config either.

sgie_config.txt (3.2 KB)
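For context, an encrypted TAO .etlt model is passed to nvinfer via the tlt-encoded-model and tlt-model-key properties. The values below are placeholders for illustration, not the contents of the attached config:

```ini
[property]
# Placeholder path and key -- substitute the values used at training/export time.
tlt-encoded-model=yolov4_tiny.etlt
tlt-model-key=nvidia_tlt
```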

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please use “onnx-file” in the configuration file. Please refer to this YOLOv4 ONNX DeepStream sample.
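A minimal sketch of the relevant [property] entries when switching to an ONNX export (file names are placeholders; the exact post-processing settings should follow the sample referenced above):

```ini
[property]
# Placeholder file names for illustration.
onnx-file=yolov4_tiny.onnx
# Engine file generated by TensorRT on first run; reused afterwards.
model-engine-file=yolov4_tiny.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
# 0 = detector
network-type=0
```

With onnx-file set, nvinfer builds the engine through the ONNX parser instead of the deprecated UFF path, which avoids the "UffParser: Unsupported number of graph 0" failure shown in the log.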

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.