Tiny Yolo v3 in Python for Jetson Nano

Hi all

I want to optimise a tiny-yolo-v3 model to run inference in Python on the Jetson Nano with my own weights.

I’ve found numerous links to this topic in forums, but most seem out of date now that this model is included in the DeepStream SDK. However, all I want to do is optimise the model rather than rewrite it completely for (yet another) SDK!

Can anyone point me to a recent tutorial on how to achieve this?

Many thanks

Hi,

Are you looking for a pure TensorRT Python example for YOLOv3?

We don’t have a sample for tiny-yolo-v3, but we do have one for yolo-v3.
Would you mind having a look at that first?
/usr/src/tensorrt/samples/python/yolov3_onnx
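
For reference, the sample is a two-step conversion; the exact prerequisites are listed in the README and requirements.txt in that directory:

cd /usr/src/tensorrt/samples/python/yolov3_onnx
sudo pip3 install -r requirements.txt   # installs the pinned onnx version, among others
python3 yolov3_to_onnx.py               # DarkNet cfg/weights -> yolov3.onnx
python3 onnx_to_tensorrt.py             # yolov3.onnx -> TensorRT engine + sample inference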

Thanks.

Hi

I ran this using the standard YOLOv3 (i.e. not a tiny version) to check that it was working, and got the following error:

onnx.onnx_cpp2py_export.checker.ValidationError: Op registered for Upsample is depracted in domain_version of 10

==> Context: Bad node spec: input: "085_convolutional_lrelu" input: "086_upsample_scale" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }

One of the requirements, onyx==1.4.1, is not available, so I installed 1.5 instead. Could this be the issue?

Hi,

This error is caused by an incompatible onnx package.
You will need to install version 1.4.1.

May I know your system environment?
I can install onnx v1.4.1 without issue:

sudo pip3 install onnx==1.4.1
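
To confirm which version was actually picked up, a quick check:

python3 -c "import onnx; print(onnx.__version__)"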

Thanks.

Hi

Thanks for getting back to me. This is what I get:

nano@nano-desktop:~$ sudo pip3 install onyx==1.4.1
[sudo] password for nano: 
Collecting onyx==1.4.1
  ERROR: Could not find a version that satisfies the requirement onyx==1.4.1 (from versions: 0.0.5, 0.0.17, 0.0.19, 0.0.20, 0.0.21, 0.1, 0.1.1, 0.1.3, 0.1.4, 0.1.5, 0.2, 0.2.1, 0.3, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6, 0.3.7, 0.3.8, 0.3.9, 0.3.10, 0.3.11, 0.3.12, 0.4, 0.4.1, 0.4.2, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.5, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.7.3, 0.7.4, 0.7.5, 0.7.6, 0.7.7, 0.7.8, 0.7.10, 0.7.11, 0.7.12, 0.7.13, 0.8.5, 0.8.7, 0.8.10, 0.8.11)
ERROR: No matching distribution found for onyx==1.4.1
WARNING: You are using pip version 19.2.1, however version 19.2.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

My environment is Python 3.6 (I believe it’s a pretty vanilla install on the Nano):

nano@nano-desktop:~$ python3 --version
Python 3.6.8

What am I doing wrong?

Hi,

You will need the onnx package rather than onyx:

sudo pip3 install onnx==1.4.1

Thanks.

I’m an idiot - thank you!

So, having got that working with the yolov3 model, I have attempted to get yolov3-tiny working with my own model, which has 2 classes. The yolo_to_onnx conversion works fine, but when I convert from ONNX to TRT I get the following:

nano@nano-desktop:/usr/src/tensorrt/samples/python/yolov3_onnx$ sudo python3 onnx_to_tensorrt.py
[sudo] password for nano: 
Loading ONNX file from path yolov3.onnx...
Beginning ONNX file parsing
Completed parsing of ONNX file
Building an engine from file yolov3.onnx; this may take a while...
[TensorRT] ERROR: Network must have at least one output
Completed creating Engine
Traceback (most recent call last):
  File "onnx_to_tensorrt.py", line 179, in <module>
    main()
  File "onnx_to_tensorrt.py", line 150, in main
    with get_engine(onnx_file_path, engine_file_path) as engine, engine.create_execution_context() as context:
  File "onnx_to_tensorrt.py", line 125, in get_engine
    return build_engine()
  File "onnx_to_tensorrt.py", line 116, in build_engine
    f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'

The changes I have made to get this far are:

If anyone has any idea what causes the [TensorRT] ERROR: Network must have at least one output in my model but not in the yolov3 model, any help would be hugely appreciated!


Hi, I saw that you have tested yolov3_onnx. How long did it take to run inference on one picture?

I have tested yolov3_onnx on a Jetson Nano, but it only reached 0.3 FPS, so I don’t think it can be used for real-time detection. Did I do something wrong?

Looking forward to your results, thanks.
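
For anyone wanting to measure this themselves, here is a minimal timing sketch around the sample’s inference call; do_inference and the buffer variables are assumed to match the names used in onnx_to_tensorrt.py and common.py:

import time

# Warm-up run, so one-off CUDA/engine setup cost is not counted.
trt_outputs = common.do_inference(context, bindings=bindings, inputs=inputs,
                                  outputs=outputs, stream=stream)

n_runs = 20
start = time.perf_counter()
for _ in range(n_runs):
    trt_outputs = common.do_inference(context, bindings=bindings, inputs=inputs,
                                      outputs=outputs, stream=stream)
elapsed = time.perf_counter() - start
print('%.1f ms per image, %.1f FPS' % (1000.0 * elapsed / n_runs, n_runs / elapsed))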

Hi,

It looks like the output layers are not declared correctly when converting the model.
If your model has a custom number of classes, please update the script with the corresponding configuration.

Ex. yolov3_to_onnx.py

# In above layer_config, there are three outputs that we need to know the output
# shape of (in CHW format):
output_tensor_dims = OrderedDict()
output_tensor_dims['082_convolutional'] = [255, 19, 19]
output_tensor_dims['094_convolutional'] = [255, 38, 38]
output_tensor_dims['106_convolutional'] = [255, 76, 76]

# Create a GraphBuilderONNX object with the known output tensor dimensions:
builder = GraphBuilderONNX(output_tensor_dims)
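
For a 2-class yolov3-tiny model, this would look roughly as below. Tiny YOLOv3 has two output layers rather than three, and each output has 3 * (num_classes + 5) = 21 channels for 2 classes; the layer names here are assumptions based on the standard yolov3-tiny layer numbering, so verify them against your own converted graph:

# yolov3-tiny has two YOLO outputs; the layer names below are hypothetical,
# check them against your own cfg / conversion output.
output_tensor_dims = OrderedDict()
output_tensor_dims['016_convolutional'] = [21, 13, 13]  # assuming a 416x416 input
output_tensor_dims['023_convolutional'] = [21, 26, 26]

builder = GraphBuilderONNX(output_tensor_dims)

Once valid output dimensions are registered, the exported ONNX graph carries real outputs and the TensorRT parser should no longer report "Network must have at least one output".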

Thanks.

Hi, I have tested yolov3-tiny, but it only reaches 5 FPS with TensorRT in 10W mode, and 2 FPS without TensorRT. Also, Python 3.6.9 is a little faster than Python 2.7.17. I am confused now!

Looking forward to your results, thanks.
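
One thing worth ruling out when benchmarking on the Nano is the power/clock configuration; these are the stock JetPack tools, though the mode numbering can differ between L4T releases:

sudo nvpmodel -m 0    # select the 10W MAXN power mode on the Nano
sudo jetson_clocks    # pin CPU/GPU/EMC clocks at their maximums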