Hello,
- I have a Jetson Orin Nano Developer Kit (8 GB), and my goal is to run real-time YOLO object detection on an RTSP stream from an IP camera.
- I have exported my custom-trained YOLO11 model to TensorRT format and I run it with the Ultralytics library via the `yolo predict` CLI command; inference itself works fine. However, monitoring with the `free` command, I discovered that used RAM constantly increases (while free RAM decreases), eventually causing the board to crash.
- I have tried the `stream=True` parameter, and free RAM still decreases, only more slowly. The program must run non-stop and save the inference results, so I need a long-term solution.
- As for my current environment, I am using:
- JetPack 6.1 with CUDA 12.6, CUDA driver 540.4.0, cuDNN 9.5
- TensorRT 10.3.0
- Torch 2.5.0
- Torchvision 0.20.0
- Ultralytics 8.3.64