I have recently followed the MailRocketSystems GitHub - mailrocketsystems/JetsonYolov5 repository to set up my Jetson Nano for YOLOv5. It worked for the provided YOLO models, which converted to TensorRT, but the tutorial for custom models did not work for me. I want to train a custom model in Google Colab and use it on my Jetson Nano. To do this, I would install a clean JetPack 4.6 on the Nano and train a model in Colab. Unfortunately, Colab uses modern dependencies, so it is incompatible. From what I have researched, I would need to convert my PyTorch model to ONNX at --opset 12, then convert to TensorRT on the Nano. However, Colab can't export at opset 12, so I would have to move the .pt to an environment on my PC and export there with dependencies that can export at opset 12. Could I have help setting up a Nano environment for TensorRT and a local environment for exporting to ONNX? ChatGPT informed a lot of this research, so I don't even know if this is a possible workflow.
Can the Nano reliably convert ONNX to TensorRT?
What packages do I need to run inference with TensorRT on the Nano?
Can I move a PyTorch file between different environments and export it to ONNX the same way?
What is the ideal setup to train a model in YOLOv5, move it to the Nano, and run TensorRT inference?
You can use the official Ultralytics YOLOv5 repo and the JetPack 4 Docker image to run and export the model:
Will a Docker image designed for YOLOv11 work to export a YOLOv5 .pt model from a different environment? How would I use this Docker image to export the model; would it be exported to ONNX and then to TensorRT, or would I follow the linked YOLOv11 tutorial you gave me and export directly from PyTorch to TensorRT like it says on the website?
Hi,
In general, you can train it on a desktop environment, copy the model to Jetson, and convert it into TensorRT.
Both PyTorch (.pt) and ONNX (.onnx) models are portable.
But it is recommended to convert the model to ONNX first, so you don't need to set up PyTorch on Jetson as well.
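For reference, a minimal sketch of the Nano-side conversion, assuming the ONNX file was exported at opset 12 on the desktop (for example with YOLOv5's export.py using --include onnx --opset 12) and that the TensorRT 8.x Python bindings from JetPack 4.6 are available. best.onnx and best.engine are placeholder file names, and trtexec can do the same job from the command line:

```python
# Minimal ONNX -> TensorRT conversion on the Nano (TensorRT 8.x from JetPack 4.6).
# "best.onnx" is a placeholder for a YOLOv5 model exported at opset 12.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("best.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parsing failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28        # 256 MB is usually enough on the Nano
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)  # FP16 is much faster on the Nano

engine = builder.build_engine(network, config)  # deprecated but still present in TRT 8.x
with open("best.engine", "wb") as f:
    f.write(engine.serialize())
```

The engine must be built on the device that will run it; an engine serialized on a desktop GPU or in Colab will not load on the Nano.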
Thanks.
Okay, so if I convert to ONNX on the Jetson Nano it should work. I just want to know which dependencies I need to match. Do I need to match the YOLOv5 commit I use in the Docker container on the Nano for export to the commit I use to train in Colab?
Can someone show me an example workflow of what this looks like in commands? I'm pretty new, so it helps to visualize it.
Hi,
For YOLO, you can check the Ultralytics team's documentation for more info:
Thanks.
The Docker image comes with the hard-to-install dependencies for JetPack 4 already set up, so you can run YOLOv5 normally inside it. Any missing dependencies can be installed manually.
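To make the requested workflow a bit more concrete, here is a minimal inference sketch for the Nano, assuming a serialized engine (best.engine, a placeholder name) built from a static-shape YOLOv5 ONNX export, with TensorRT (shipped with JetPack), pycuda, and numpy installed; letterboxing, normalization, and NMS are omitted:

```python
# Minimal TensorRT inference loop on the Nano (assumes TensorRT 8.x, pycuda, numpy).
# "best.engine" is the serialized engine built from the ONNX file as shown above.
import numpy as np
import pycuda.autoinit          # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

with open("best.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One host/device buffer pair per binding; binding 0 is assumed to be the image input.
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

def infer(image_chw):
    """image_chw: float32 array already resized/normalized to the network input shape."""
    np.copyto(host_bufs[0], image_chw.ravel())
    cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
    context.execute_v2(bindings)
    for h, d in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh(h, d)
    return [h.copy() for h in host_bufs[1:]]   # raw YOLO outputs, NMS still needed
```

In practice, tensorrt, pycuda, numpy, and an OpenCV build for pre- and post-processing are essentially all that TensorRT inference on the Nano needs.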
The Docker method did work, and I am able to convert models trained in Colab to TensorRT. However, the Docker image does not have GStreamer support in OpenCV, and I want to use my CSI cameras. How can I use the CSI cameras without breaking the Docker setup? Also, I want to display the live video feed with inference on the Nano, and I am told that using X11 and the GUI inside Docker gives low performance, so what can I do?
What was the error with Docker?
A TensorRT engine converted in Colab will not work on a Jetson Nano.
JetPack 4 is too old to work easily without Docker.
The Docker container is working and my model pipeline is complete: I can train, export, and run inference. I am now struggling to use my CSI cameras and my HDMI display inside the container, because I want to run inference with my model on a live video feed and then display it. The image comes with headless OpenCV, so I can't use OpenCV for this unless I rebuild it, which I perceive as risky in this fragile Docker environment. As a result, cv2.VideoCapture and cv2.imshow do not work, returning an error saying I need to rebuild. How can I capture a video feed and display it in Docker? Also, will displaying a feed inside Docker be slower and bottleneck my display FPS?
Did you try installing OpenCV with pip?
pip uninstall opencv-python-headless -y
pip install opencv-python
For the GUI, you need to start the container with access to the host's X server: allow local X connections on the host (xhost) and pass the DISPLAY environment variable and the /tmp/.X11-unix socket into the container.
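Whichever wheel ends up installed, it is worth checking what it was actually built with before going further; a quick check from Python:

```python
# Print which GUI backend and video I/O support the installed OpenCV was built with.
import cv2

print(cv2.__version__)
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line or "GTK" in line or "GUI" in line:
        print(line.strip())
```

The stock opencv-python wheel from PyPI is typically built without GStreamer, which matters for the CSI cameras discussed below.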
Ultralytics AI told me not to pip install opencv-python because it could break dependencies, but I will try it. However, just pip installing it won't give me GStreamer support; it will give me GUI support, though, letting me use cv2.imshow and so on. To use the CSI cameras, how can I make a GStreamer pipeline in Python without using OpenCV (since it won't support GStreamer), and be able to extract frames from the feed to display with imshow and also run inference on? (I am away from my Nano at the moment, so I can test all of this in more detail when I get home, but I would like an idea of how I should go about this.)
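One way to sketch this is PyGObject with an appsink, assuming python3-gi and the NVIDIA GStreamer plugins are available inside the container and that the container was started with access to the Argus camera socket; the sensor-id, resolution, and framerate below are example values:

```python
# Sketch: CSI capture via a GStreamer appsink, with no OpenCV in the capture path.
# Assumes python3-gi (PyGObject) and the NVIDIA GStreamer plugins are available in
# the container, and that the container can reach the Argus camera socket.
import numpy as np
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

PIPELINE = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! "
    "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! "
    "appsink name=sink max-buffers=1 drop=true"
)

pipeline = Gst.parse_launch(PIPELINE)
sink = pipeline.get_by_name("sink")
pipeline.set_state(Gst.State.PLAYING)

def read_frame():
    """Pull one BGR frame from the appsink as a numpy array (or None on failure)."""
    sample = sink.emit("pull-sample")
    if sample is None:
        return None
    buf = sample.get_buffer()
    caps = sample.get_caps().get_structure(0)
    height, width = caps.get_value("height"), caps.get_value("width")
    ok, mapinfo = buf.map(Gst.MapFlags.READ)
    if not ok:
        return None
    frame = np.frombuffer(mapinfo.data, dtype=np.uint8).reshape(height, width, 3).copy()
    buf.unmap(mapinfo)
    return frame
```

Frames returned by read_frame() can then be fed to the inference code and, once a GUI-capable OpenCV is in place, displayed with cv2.imshow.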
You can try this package
Thank you for the suggestion, but that is essentially a module built around OpenCV, so it still wouldn't work with CSI cameras unless the underlying OpenCV had GStreamer built in, which it doesn't. I'm going to manually create a GStreamer pipeline in Python and find a way to extract frames from it instead, avoiding OpenCV for camera capture.
EDIT: I solved this by rebuilding OpenCV with all of the CMake flags the same except for adding GTK support and GStreamer support. I just had to make sure to start the container with access to Argus and X11, and then I could use OpenCV's imshow and VideoCapture.
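For anyone following along, this is roughly what the finished capture-and-display loop looks like once OpenCV has GStreamer and GTK support; the pipeline string and resolution are example values, and the container must be started with the Argus and X11 sockets available, as described above:

```python
# Sketch of the final capture/display loop, assuming OpenCV was rebuilt with
# GStreamer and GTK support and the container has access to Argus and X11.
import cv2

CSI_PIPELINE = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! "
    "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(CSI_PIPELINE, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise SystemExit("Failed to open CSI camera (is OpenCV built with GStreamer?)")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # run TensorRT inference on `frame` here and draw the detections on it
    cv2.imshow("yolov5-trt", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```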