How to Increase FPS on Jetson Xavier NX for Real-Time AI & Video Processing?

Hello,

I am using a Jetson Xavier NX for real-time object detection and tracking, usually with YOLOv8. My current setup includes:

YOLOv8n model converted to TensorRT

Torch, TorchVision, and ONNX installed with CUDA support

OpenCV compiled with CUDA support

Despite this, I am only achieving 12–15 FPS.

I would like to know:

How to properly increase FPS on the Jetson Xavier NX for real-time AI inference and video streaming.

Which settings, optimizations, or modes can maximize performance for TensorRT models on this platform.

Any best practices, recommended configurations, or system-level tweaks specific to Jetson Xavier NX.

Looking forward to your help.

Did you convert the model to FP16?

Hi,

Could you check GPU utilization with tegrastats?

$ sudo tegrastats

It’s common that the pre-processing or post-processing, which run on the CPU, take a long time to finish.
To solve this, it’s recommended to run YOLO with DeepStream:
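Before switching pipelines, it can help to confirm where the time actually goes. Below is a minimal, self-contained timing sketch; the `preprocess`, `infer`, and `postprocess` functions are placeholder stand-ins that you would replace with your real capture, TensorRT, and NMS/decoding calls:

```python
import time

# Hypothetical stand-ins for the real pipeline stages; swap in your
# actual pre-processing, TensorRT inference, and post-processing calls.
def preprocess(frame):
    return [x * (1.0 / 255.0) for x in frame]   # e.g. CPU-side normalization

def infer(tensor):
    return tensor                                # TensorRT engine call goes here

def postprocess(output):
    return output                                # NMS / box decoding goes here

def profile_pipeline(frames):
    """Accumulate wall-clock time per stage to locate the bottleneck."""
    totals = {"pre": 0.0, "infer": 0.0, "post": 0.0}
    for frame in frames:
        t0 = time.perf_counter()
        tensor = preprocess(frame)
        t1 = time.perf_counter()
        output = infer(tensor)
        t2 = time.perf_counter()
        postprocess(output)
        t3 = time.perf_counter()
        totals["pre"] += t1 - t0
        totals["infer"] += t2 - t1
        totals["post"] += t3 - t2
    return totals

if __name__ == "__main__":
    dummy_frames = [[0.0] * 1000 for _ in range(100)]
    for stage, seconds in profile_pipeline(dummy_frames).items():
        print(f"{stage}: {seconds * 1000:.1f} ms total")
```

If the pre/post stages dominate, that confirms the CPU is the bottleneck and moving to a GPU-resident pipeline such as DeepStream should help.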

Thanks.

Thank you.
Yes, I did:
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
Now it is:

Thank you!

Okay, I’ll use DeepStream.
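For anyone following along, a minimal `nvinfer` configuration for a pre-built FP16 TensorRT engine might look like the sketch below. The file path and class count are placeholders; a custom YOLO model will also need its own output parser, so check the DeepStream documentation for your version:

```ini
[property]
gpu-id=0
# pre-generated TensorRT engine (placeholder path)
model-engine-file=model.engine
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
batch-size=1
num-detected-classes=80
```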

I used this command:

$ sudo tegrastats

Are there other ways to increase Jetson’s speed or FPS?

Hi,

Did you run the inference at the same time as tegrastats?
The CPU and GPU load shared above is less than 10%, which is very low.

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.