–> Hi, I am using a Jetson Orin Nano Super, but when I download my custom PyTorch model in various ways (gdown, wget, or copying it over via a pen drive), the file always ends up compressed. I can't load that model to export it to an engine file. Can you please guide me?
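For context, the export step I'm ultimately trying to run looks roughly like this (a minimal sketch using the Ultralytics Python API; best.pt is just a placeholder for my custom weights):

from ultralytics import YOLO

# Load the custom PyTorch weights (placeholder filename for my model)
model = YOLO("best.pt")

# Export to a TensorRT engine with FP16, matching the trtexec flags below
model.export(format="engine", half=True, imgsz=640)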
–> I also tried working with ONNX Runtime (GPU), but it was very slow (~20 FPS) inside Docker.
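The ONNX Runtime setup was roughly this (paraphrased from memory, not my exact script):

import onnxruntime as ort

# Ask for the CUDA provider first, falling back to CPU if it is unavailable
session = ort.InferenceSession(
    "yolo11s.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # check whether CUDA is actually active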
–> When I tried to convert the ONNX model into an engine using:
/usr/src/tensorrt/bin/trtexec --onnx=yolo11s.onnx --saveEngine=yolo11s.engine --fp16
it failed with: AttributeError: 'NoneType' object has no attribute 'get'
–> Hence, I checked the engine file using:
/usr/src/tensorrt/bin/trtexec --loadEngine=yolo11s.engine --fp16 --shapes=images:1x3x640x640
and it loaded successfully, but I can't run predictions on my video.
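By "can't predict" I mean that an attempt along these lines fails (a sketch of what I tried; the video path is an example):

from ultralytics import YOLO

# Load the TensorRT engine exported above
model = YOLO("yolo11s.engine")

# Run inference on a local video file
results = model.predict(source="test_video.mp4", imgsz=640)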
No, but I have tried this command:
/usr/src/tensorrt/bin/trtexec --loadEngine=yolo11s_haygriv1.engine --fp16 --shapes=images:1x3x640x640
and this is the output:
I just want to load this engine file on an RTSP camera feed and see its detections in real time.
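In other words, the end goal is something along these lines (a sketch of what I want working; the RTSP URL and engine name are placeholders):

from ultralytics import YOLO

# Load the custom TensorRT engine
model = YOLO("yolo11s.engine")

# Stream inference frame by frame from the RTSP camera (placeholder URL)
for result in model.predict(
    source="rtsp://user:pass@192.168.1.10:554/stream",
    stream=True,  # generator mode, processes frames as they arrive
    show=True,    # display annotated frames in a window
    imgsz=640,
):
    pass  # result.boxes holds the detections for each frame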
I am using a Docker container (Ultralytics YOLOv8 - NVIDIA Jetson AI Lab).
Can you please provide the code to run my custom model on the Jetson Orin Nano?