YOLO model not working properly on Jetson Orin Nano

–> Hi, I am using a Jetson Orin Nano Super. When I download my custom PyTorch model in various ways (gdown, wget, or copying via pen drive), the file always arrives compressed, so I can't load the model to export it to an engine file. Can you please guide me?

–> I also tried ONNX Runtime (GPU) inside Docker, but it was very slow (about 20 FPS).

–> When I tried to convert the ONNX model into an engine using:
/usr/src/tensorrt/bin/trtexec --onnx=yolo11s.onnx --saveEngine=yolo11s.engine --fp16

and then ran this command:

yolo predict model=yolo11s.engine source=/ultralytics/video1.mp4

I got this error:

(AttributeError: 'NoneType' object has no attribute 'get')

–> Hence, I checked the engine file using:
/usr/src/tensorrt/bin/trtexec --loadEngine=yolo11s.engine --fp16 --shapes=images:1x3x640x640
and it ran successfully, but I still can't run prediction on my video.

Hi,

Have you tried exporting with the Ultralytics tools directly?
For example (a minimal sketch, assuming your PyTorch checkpoint is named yolo11s.pt; half=True matches the --fp16 flag you used with trtexec):
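yolo export model=yolo11s.pt format=engine half=True device=0

This writes a yolo11s.engine file that includes the metadata yolo predict expects.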

Thanks.

No, but I have tried this command:
/usr/src/tensorrt/bin/trtexec --loadEngine=yolo11s_haygriv1.engine --fp16 --shapes=images:1x3x640x640
and this is the output:


I just want to load this engine file with an RTSP camera feed as input and see its detections in real time.
I am using the Docker container (Ultralytics YOLOv8 - NVIDIA Jetson AI Lab).
Can you provide the code to run my custom model on the Jetson Orin Nano?

Why don’t you just export to TensorRT using Ultralytics? An engine exported through Ultralytics embeds the model metadata that yolo predict reads, while an engine built with plain trtexec does not, which is why loading it fails with that NoneType error.

Hi,

The container you are using is based on Ultralytics.
They have already added RTSP camera support. Please check the documentation below:

stream: 'rtsp://example.com/media.mp4'

As @Y-T-G suggested, you can use the TensorRT engine as the backend and get the same performance as running TensorRT directly.
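For example (a minimal sketch; the RTSP address below is a placeholder for your camera's stream URL):

yolo predict model=yolo11s.engine source='rtsp://example.com/media.mp4' show=True

show=True opens a display window so you can watch the detections in real time.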
Thanks.
