Errors on converting yolov3 caffe xilinx model to trt

Hi ,
I converted my YOLOv3 Caffe Xilinx model to ONNX, then used the ONNX parser to convert the ONNX model to TensorRT. While doing so, I hit this error:

Environment
TensorRT Version: 7.2.3.4
GPU Type: RTX 3080
Nvidia Driver Version: 470.63.01
CUDA Version: 11.1
CUDNN Version: 8.1.0
Operating System + Version: Ubuntu 18.04 (x86_64)
Python Version (if applicable): 3.6.9 (using virtual environment)

ERROR: builtin_op_importers.cpp:516 In function importConv:
[6] Assertion failed: nchan == -1 || kernelWeights.shape.d[1] * ngroup == nchan
[09/30/2021-09:34:59] [E] Failed to parse onnx file
[09/30/2021-09:34:59] [E] Parsing model failed
[09/30/2021-09:34:59] [E] Engine creation failed
[09/30/2021-09:34:59] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # caffe_bvlc/TensorRT-7.2.3.4.Ubuntu-18.04.x86_64-gnu.cuda-11.1.cudnn8.1/TensorRT-7.2.3.4/bin/trtexec --onnx=/home/vvsa/output.onnx --batch=64 --int8 --saveEngine=caffe.plan --workspace=256

How can I get past this?

Hi,
Could you share the ONNX model and the script, if you have not already, so that we can assist you better?
In the meantime, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the snippet below.

check_model.py

import onnx

# Replace with the path to your ONNX model
filename = "yourONNXmodel.onnx"

# Load the model and run the ONNX checker, which raises an
# exception if the model is structurally invalid
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hi,

Could you please let us know if this is the same issue? If yes, please follow up in that thread.

Thank you.