Converting Darknet weights to ONNX fails

I used the official yolov4.weights and converted them to ONNX with Tianxiaomo/pytorch-YOLOv4. The ONNX file is produced, but there are no detection boxes in the predicted image. I can also convert the ONNX file to a TensorRT engine, but an error occurs when running it in DeepStream.
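For reference, the export step was roughly the following (a minimal sketch of what the repo's demo_darknet2onnx.py does; the module path, the Darknet class arguments, and the file names are assumed from the Tianxiaomo/pytorch-YOLOv4 layout, not my exact command):

```
# Sketch of the Darknet -> ONNX export (names/paths are assumptions, not the exact invocation)
import torch
from tool.darknet2pytorch import Darknet  # class and module name assumed from the repo layout

model = Darknet("yolov4.cfg", inference=True)  # inference flag assumed; enables decoded box/conf outputs
model.load_weights("/data/modle/yolov4new.weights")
model.eval()

dummy = torch.randn(1, 3, 608, 608)  # static batch of 1, 608x608 input
torch.onnx.export(
    model,
    dummy,
    "yolov4_1_3_608_608_static.onnx",
    input_names=["input"],
    output_names=["boxes", "confs"],
    opset_version=11,
)
```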

darknet2onnx warning:

```
Loading weights from /data/modle/yolov4new.weights... Done!
/opt/conda/lib/python3.7/site-packages/numpy/core/function_base.py:22: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  i = operator.index(i)
/data/pytorch-YOLOv4/tool/yolo_layer.py:227: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  bx = bxy[:, ii : ii + 1] + torch.tensor(grid_x, device=device, dtype=torch.float32) # grid_x.to(device=device, dtype=torch.float32)
/data/pytorch-YOLOv4/tool/yolo_layer.py:229: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  by = bxy[:, ii + 1 : ii + 2] + torch.tensor(grid_y, device=device, dtype=torch.float32) # grid_y.to(device=device, dtype=torch.float32)
```

DeepStream error:

```
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:06.725939415  7229 0x562f994a3930 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/root/deepstream_5.0/sources/deepstream_yolov4/yolov4_16_3_608_608_static.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x608x608       
1   OUTPUT kFLOAT boxes           22743x1x4       
2   OUTPUT kFLOAT confs           22743x80        

0:00:06.726037835  7229 0x562f994a3930 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1518> [UID = 1]: Backend has maxBatchSize 1 whereas 16 has been requested
0:00:06.726052808  7229 0x562f994a3930 WARN                 nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1689> [UID = 1]: deserialized backend context :/root/deepstream_5.0/sources/deepstream_yolov4/yolov4_16_3_608_608_static.engine failed to match config params, trying rebuild
0:00:06.749306157  7229 0x562f994a3930 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:934 failed to build network since there is no model file matched.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:872 failed to build network.
0:00:06.749732112  7229 0x562f994a3930 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
0:00:06.749754829  7229 0x562f994a3930 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 1]: build backend context failed
0:00:06.749769513  7229 0x562f994a3930 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 1]: generate backend failed, check config file settings
0:00:06.749917336  7229 0x562f994a3930 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:06.749932051  7229 0x562f994a3930 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_yolov4/config_infer_primary_yoloV4.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:651>: Failed to set pipeline to PAUSED
```

Hi,

Do you mean that detection is not working when running the ONNX file with Tianxiaomo/pytorch-YOLOv4’s script?
If yes, would you mind checking this issue with the author?

It’s recommended to clarify the ONNX issue first before looking into the DeepStream failure.
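For example, you can sanity-check the ONNX model outside of DeepStream with onnxruntime. Below is a minimal sketch; the file name is illustrative, and the output names and shapes ("boxes", "confs", 22743 candidates) are taken from the log above:

```
# Minimal ONNX sanity check (file name is illustrative)
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov4_1_3_608_608_static.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # e.g. "input", [1, 3, 608, 608]

# A random tensor is enough to verify shapes; use a real preprocessed image
# (RGB, resized to 608x608, scaled to [0, 1]) to check the actual confidences.
x = np.random.rand(1, 3, 608, 608).astype(np.float32)
boxes, confs = sess.run(None, {inp.name: x})
print(boxes.shape, confs.shape)  # expect roughly (1, 22743, 1, 4) and (1, 22743, 80)
print(float(confs.max()))        # a near-zero max on a real image points to a broken export
```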

Thanks.