Optimise YOLOv3 in PyTorch through TensorRT

I want to optimise the YOLOv3 PyTorch implementation at https://github.com/ayooshkathuria/pytorch-yolo-v3/blob/master/darknet.py. Currently it runs at about 3 fps.
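
To compare speed before and after optimisation, a consistent throughput measurement helps. A minimal sketch (the `detect` callable and the frame list are placeholders for illustration, not names from the repo):

```python
import time

def measure_fps(detect, frames, warmup=2):
    """Run `detect` once per frame and return frames per second.

    `detect` stands in for whatever runs one frame through the network;
    `frames` is any iterable of inputs.
    """
    for frame in frames[:warmup]:  # warm up caches / CUDA kernels first
        detect(frame)
    start = time.perf_counter()
    for frame in frames:
        detect(frame)
    return len(frames) / (time.perf_counter() - start)
```

Measuring over many frames after a warm-up pass avoids counting one-time startup cost as inference time.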

I found this PyTorch-model-to-TensorRT converter code: https://github.com/modricwang/Pytorch-Model-to-TensorRT/blob/master/main.py

>>> import tensorrt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'tensorrt'
>>> import pycuda
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pycuda'
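
One quick way to see which pieces of the workflow are actually importable in the current interpreter (a small helper written for illustration, not part of either repo):

```python
import importlib.util

def missing_modules(names):
    """Return the subset of `names` that cannot be imported here."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# On a stock JetPack install these should all be present; inside a fresh
# virtualenv, the system-installed tensorrt/pycuda packages may not be
# visible, which would reproduce the ModuleNotFoundError above.
print(missing_modules(["tensorrt", "pycuda", "torch"]))
```

If `tensorrt` shows up as missing only inside a virtualenv, the environment is likely not seeing the system site-packages where JetPack installs it.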

According to the developer guide https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html, TensorRT is pre-installed. I am confused.

Is this the correct way to use TensorRT?
I'm new to this and would appreciate some guidance.

I found this Hello world example https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#network_api_pytorch_mnist

Trying to install the requirements, I got this error:

ERROR: torch-0.4.1-cp36-cp36m-linux_x86_64.whl is not a supported wheel on this platform.
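
That wheel's filename encodes its target platform: `linux_x86_64` means it was built for x86-64 Linux, while the Jetson Nano is aarch64, so pip refuses it. You can confirm what your interpreter reports:

```python
import platform
import sys

# The wheel tag cp36-cp36m-linux_x86_64 must match both of these:
print(platform.machine())               # CPU architecture, e.g. aarch64 on a Jetson
print("cp%d%d" % sys.version_info[:2])  # CPython tag, e.g. cp36 for Python 3.6
```

If the architecture printed here differs from the one in the wheel's filename, pip will reject the wheel no matter what else matches.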


You will need to build PyTorch from source for ARM support.
Here are the build steps and a prebuilt package for your reference:

For YOLO, we have a sample that uses the ONNX format.
It's recommended to check it first: /usr/src/tensorrt/samples/python/yolov3_onnx/


Thank You

Just saw it.

I have also posted an issue for yolov3_onnx: https://devtalk.nvidia.com/default/topic/1052153/jetson-nano/tensorrt-backend-for-onnx-on-jetson-nano/2/?offset=23#

If the model is trained using PyTorch on another machine and then converted to TensorRT, would you still need to use the Jetson Nano build of PyTorch during training?