Jetson Orin NX GPU not used while running YOLOv8 model

Hi, I am trying to run a YOLOv8 model for bed detection on the Jetson Orin NX, but the Orin is using only the CPU and GPU usage stays at 0.0%. In my logic I added the torch library to select the GPU device. The model is also consuming 100% of six CPU threads.

For the Ultralytics installation, I followed the YOLOv8 installation procedure and verified the installed package.

from ultralytics import YOLO

model = YOLO("<model_file>")
model.predict(image, imgsz=(480, 640), classes=0, verbose=True)
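
The device-selection logic I mentioned above looks roughly like this (a sketch; the file names are placeholders):

import torch
from ultralytics import YOLO

# Select the GPU when PyTorch can see it, otherwise fall back to the CPU
device = 0 if torch.cuda.is_available() else "cpu"

model = YOLO("yolov8n.pt")  # hypothetical placeholder for my model file
model.predict("image.jpg", imgsz=(480, 640), classes=0, device=device, verbose=True)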

How can I make the Orin NX use the GPU for the detection model?

Thanks in advance.

Hi,

Ultralytics runs the model on the GPU by default (through TensorRT).
Do you run inference with a script like the sample below?
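
For example, something along these lines (a sketch of the standard Ultralytics TensorRT export/predict flow; file names are placeholders):

from ultralytics import YOLO

# Export the PyTorch checkpoint to a TensorRT engine (GPU inference)
model = YOLO("yolov8n.pt")
model.export(format="engine")  # writes yolov8n.engine

# Load the engine and run detection on the GPU
trt_model = YOLO("yolov8n.engine")
results = trt_model.predict("image.jpg", verbose=True)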

Thanks.

Thanks for your response.

After converting the PyTorch (.pt) model to a TensorRT (.engine) model, I am able to run it on the Jetson Orin NX 8GB with the GPU, but it consumes a large amount of RAM: in the idle state the Jetson Orin uses ~2.0 GB, and running the detection model adds another ~2 GB, so ~4 GB of RAM is consumed in total.
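
For reference, I read the numbers with a small check like this (a sketch; psutil here is my assumption of a convenient way to measure, not a Jetson-specific tool):

import psutil

# Report system-wide RAM usage in GB (~2.0 GB at idle, ~4 GB with the model loaded)
used_gb = psutil.virtual_memory().used / 1024**3
print(f"RAM used: {used_gb:.1f} GB")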

Is there any way to reduce it?

Thanks in advance.

Hi,

Which JetPack version do you use?

Starting with CUDA 11.8, we introduced a feature called lazy loading that can reduce memory usage.
With lazy loading, a CUDA app (e.g. TensorRT) doesn’t need to load the whole CUDA library at startup; each module is loaded only when it is first needed.
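
As a sketch, on CUDA 11.8 or newer the feature is enabled with an environment variable set before the first CUDA context is created:

import os

# Enable CUDA lazy loading; must be set before any CUDA library is initialized
os.environ["CUDA_MODULE_LOADING"] = "LAZY"

import torch  # CUDA modules are now loaded on first use rather than at startup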

Thanks.

Hi,

The JetPack version is 5.1.4 with Ubuntu 20.04, and the CUDA version is 11.4.

Is there any way to use lazy loading with this setup?

Also, the ‘jetson_release’ command shows a ‘Jetpack missing’ error.

Thanks for your response.

Hi,

Are you able to reflash the system to JetPack 6?
JetPack 6 has CUDA 12 and TensorRT 8.6.

Thanks.

Hi,
That’s not possible in my case, because I am using Ubuntu 20.04 for ROS Noetic.

Is there any way to reduce the RAM usage on this setup?

Thanks.

Hi,

Could you build the TensorRT engine without cuBLAS and cuDNN to see if it helps?
Please find the info below:
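
For reference, a minimal sketch with the TensorRT Python API (the ONNX file name is a placeholder); an empty tactic-source mask keeps the cuBLAS, cuBLASLt, and cuDNN libraries from being loaded:

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # placeholder ONNX export of the model
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
# 0 = no extra tactic sources: excludes the cuBLAS, cuBLASLt, and cuDNN tactics
config.set_tactic_sources(0)

# The trtexec equivalent is:
#   trtexec --onnx=model.onnx --saveEngine=model.engine \
#           --tacticSources=-CUBLAS,-CUBLAS_LT,-CUDNN
engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine)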

Thanks.

Hi,

Yes, I will try it and check on the Jetson Orin.

Thanks.

Is this still an issue that needs support? Is there any result you can share?

Hi,
Yes, I still need support.

I am not building the engine myself: I have converted the PyTorch model to ONNX and then to a TensorRT model. Is there any way to exclude cuBLAS and cuDNN during that conversion?
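
My conversion flow is roughly this (file names are placeholders):

from ultralytics import YOLO

# Step 1: PyTorch checkpoint -> ONNX
model = YOLO("model.pt")
model.export(format="onnx")  # writes model.onnx

# Step 2: ONNX -> TensorRT engine via the default Ultralytics export path
model.export(format="engine")  # writes model.engine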

Thanks in advance.