Hi,
I have exactly the same issue as @nrj127 above (this is with the yolov3 example provided in TensorRT-5.1.5.0/samples/python/yolov3_onnx)
```
File "onnx_to_tensorrt.py", line 192, in main
    trt_outputs = [output.reshape(shape) for output, shape in zip(trt_outputs, output_shapes)]
ValueError: cannot reshape array of size 6498 into shape (1,255,19,19)
```
What is the relationship between the values:
```python
# Output shapes expected by the post-processor
output_shapes = [(1, 255, 19, 19), (1, 255, 38, 38), (1, 255, 76, 76)]
```
from the Python example file TensorRT-5.1.5.0/samples/python/yolov3_onnx/onnx_to_tensorrt.py, and the number of output classes specified in the original yolov3.cfg file used as input to the earlier TensorRT-5.1.5.0/samples/python/yolov3_onnx/yolov3_to_onnx.py step?
In the standard example, the yolov3 net is trained for 80 classes (COCO); @nrj127 has 10 and I have 1. What changes are needed to this line (#177 of onnx_to_tensorrt.py) for a custom number of output classes?
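My guess at the relationship, based on the YOLOv3 head layout (3 anchors per grid cell, each predicting 4 box coordinates + 1 objectness score + one score per class), is that the channel count should be 3 * (num_classes + 5). That would explain 255 = 3 * (80 + 5) for COCO, and my size-6498 output (6498 = 18 * 19 * 19, with 18 = 3 * (1 + 5) for a single class). A sketch of what I think line #177 should become; `yolo_output_shapes` is my own hypothetical helper, and the grid sizes assume the sample's 608x608 input:

```python
# Hypothetical helper: derive the post-processor reshape targets from the
# class count, assuming the standard YOLOv3 head layout.
def yolo_output_shapes(num_classes, grid_sizes=(19, 38, 76)):
    # 3 anchors per cell; each anchor predicts 4 box coords,
    # 1 objectness score, and num_classes class scores.
    channels = 3 * (num_classes + 5)
    return [(1, channels, g, g) for g in grid_sizes]

# 80 classes (COCO) reproduces the sample's hard-coded shapes:
#   [(1, 255, 19, 19), (1, 255, 38, 38), (1, 255, 76, 76)]
# 1 class gives channels = 18, matching 6498 = 18 * 19 * 19 from my error.
output_shapes = yolo_output_shapes(1)
```

Can someone confirm whether this is the intended relationship, or whether other changes are needed as well?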
I am following the steps at https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#yolov3_onnx but using a custom-trained yolov3 model in place of the downloaded one (and I have verified the model works correctly elsewhere). I have a matching .cfg and .weights file, and I have already successfully generated a .onnx file for the network using yolov3_to_onnx.py (for which you must have onnx==1.4.1, neither earlier nor later, AFAIK).
Thanks for your help.
[The link you sent regarding the MNIST example is not relevant to this discussion.]