Implementing YOLOv3 with TensorRT on the Jetson

I was trying to convert a Darknet YOLOv3-tiny model to a .uff model, and I have finished implementing the C++ code, including inference and the NMS algorithm.

However, at runtime the UFFParser fails to parse the cond/merge layer.

Is there any way to solve this?

Thank you

Hi,

It requires several custom plugins.
Here is our tutorial for YOLOv2 and YOLOv3 with TensorRT for your reference:
[url]https://github.com/vat-nvidia/deepstream-plugins[/url]

Thanks.

Hi, I am using the sample code on JetPack 4.2 to convert YOLO to ONNX and then ONNX to TRT. I managed to run yolov3_to_onnx to get an .onnx file. However, when I run python onnx_to_tensorrt.py, I get the following error: ValueError: cannot reshape array of size 16245 into shape (1,255,19,19)

I have 10 classes, and I have adjusted the config by changing the number of classes and the number of filters using the formula 3*(5+10) = 45.

Can you help me, or let me know whether I need to change the code to get it running with 10 classes?
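[Editor's note: a quick arithmetic check, not from the original posts, showing where the mismatch in the error message comes from. It assumes the sample's 608x608 input, which gives the 19x19 grid at the coarsest scale, and uses the 3*(5+classes) filter formula quoted above.]

```python
# Why "cannot reshape array of size 16245 into shape (1, 255, 19, 19)":
# the network was built for 10 classes, but the post-processor shape
# is hard-coded for 80 classes (255 channels).
num_classes = 10
filters = 3 * (5 + num_classes)   # 45 channels, per the formula above
custom_size = filters * 19 * 19   # 16245 -- the size in the error message
default_size = 255 * 19 * 19      # 92055 -- what (1, 255, 19, 19) expects
print(custom_size, default_size)  # 16245 92055
```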

Hi,

Did you follow the steps shared in our document?
[url]https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#onnx_mnist_sample[/url]

By the way, it’s recommended to file a new topic specific to your own question.
Thanks.

Hi,

I have exactly the same issue as @nrj127 above (this is with the YOLOv3 example provided in TensorRT-5.1.5.0/samples/python/yolov3_onnx):

File "onnx_to_tensorrt.py", line 192, in main
    trt_outputs = [output.reshape(shape) for output, shape in zip(trt_outputs, output_shapes)]
ValueError: cannot reshape array of size 6498 into shape (1,255,19,19)

What is the relationship between the values:

# Output shapes expected by the post-processor
    output_shapes = [(1, 255, 19, 19), (1, 255, 38, 38), (1, 255, 76, 76)]

from the Python example file TensorRT-5.1.5.0/samples/python/yolov3_onnx/onnx_to_tensorrt.py

and the number of output classes specified in the original yolov3.cfg file used as input to the earlier TensorRT-5.1.5.0/samples/python/yolov3_onnx/yolov3_to_onnx.py?

In the standard example, the YOLOv3 net is trained for 80 classes (COCO); @nrj127 has 10 and I have 1. What changes are needed to this line (#177 of onnx_to_tensorrt.py) for a custom number of output classes?

I am following the steps at https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#yolov3_onnx but using a custom trained YOLOv3 model in place of the downloaded one (which I have verified works correctly elsewhere). I have matching .cfg and .weights files, and I have already generated a .onnx file for the network using yolov3_to_onnx.py (for which you must have onnx==1.4.1, no earlier and no later, AFAIK).

Thanks for your help.

[The link you sent regarding the MNIST example is not relevant to this discussion.]

Answering my own query: change line #177 of onnx_to_tensorrt.py as follows:

# Output shapes expected by the post-processor
    number_of_output_classes = 1

    # from the formula in the YOLOv3 paper: N x N x [3 * (4 + 1 + #classes)]
    filters = 3 * (4 + 1 + number_of_output_classes)
    output_shapes = [(1, filters, 19, 19),
                     (1, filters, 38, 38),
                     (1, filters, 76, 76)]
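[Editor's note: the fix above can also be written as a small helper, computing all three per-scale shapes from the class count. This is a sketch; the grid sizes (19, 38, 76) assume the sample's 608x608 input resolution.]

```python
def yolo_output_shapes(num_classes, grids=(19, 38, 76), batch=1):
    """Per-scale output shapes expected by the YOLOv3 post-processor."""
    # 3 anchor boxes, each with 4 box coords + 1 objectness + class scores
    filters = 3 * (4 + 1 + num_classes)
    return [(batch, filters, g, g) for g in grids]

print(yolo_output_shapes(1))
# [(1, 18, 19, 19), (1, 18, 38, 38), (1, 18, 76, 76)]
```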

Also change file data_processing.py as follows:

# change to read the number of classes from a file

LABEL_FILE_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'classes.txt') # TPB
ALL_CATEGORIES = load_label_categories(LABEL_FILE_PATH)

CATEGORY_NUM = len(ALL_CATEGORIES)
# assert CATEGORY_NUM == 80

from lines ~64-70. Make sure you have a classes.txt file containing your list of custom class names, one per line.