I’m running a YOLOX-S model on a Jetson Orin Nano (Super) with JetPack 6.2. The model was trained and then exported to a TensorRT engine using the provided tools, and I’m running inference with Python and OpenCV, getting around 25 FPS.
I’ve tested this with the device set to 15W power mode and jetson_clocks enabled.
Here’s the code snippet I used to calculate the FPS:
import time

frame_count = 0
start_time = time.time()  # start of video processing, used to compute the average FPS

while True:
    ret, frame = input_video.read()
    if not ret:
        break
    frame_count += 1
    results, img_info = predictor.inference(frame)
    ratio = img_info["ratio"]

end_time = time.time()
avg_fps = frame_count / (end_time - start_time)  # end-to-end FPS, including video decode
print(f"FPS = {avg_fps}")
I couldn’t find any official benchmarks for YOLOX models on Jetson devices, so I’m wondering whether ~25 FPS is the expected performance for YOLOX-S (640x640, FP16 TensorRT engine) on the Orin Nano.
Also the total power usage is less than 9W. Is this normal?
I’ve measured the average inference-only FPS for YOLOX, and I’m getting around 35 FPS. I also tested YOLO11n in FP16 mode and observed inference times between 7 and 13ms, which looks reasonable.
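Roughly, the inference-only number comes from timing just the predictor.inference() call instead of the whole loop. A minimal sketch (the 10-frame warm-up is an arbitrary choice on my part, and predictor / input_video are the same objects as in the snippet above):

import time

# Time only predictor.inference(), excluding video decode and drawing.
warmup = 10          # arbitrary: skip the first frames while clocks ramp up
times = []
frame_idx = 0

while True:
    ret, frame = input_video.read()
    if not ret:
        break
    t0 = time.time()
    results, img_info = predictor.inference(frame)
    t1 = time.time()
    frame_idx += 1
    if frame_idx > warmup:
        times.append(t1 - t0)

if times:
    avg_ms = 1000 * sum(times) / len(times)
    print(f"inference only: {avg_ms:.1f} ms/frame, {1000 / avg_ms:.1f} FPS")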
During inference, the power consumption stays under 9W, and even under a combined CPU and GPU stress test in MAXN SUPER mode it only peaks at around 13W. I’m using a 45W adapter, so there’s still plenty of headroom. I’ve also come across discussions suggesting that enabling jetson_clocks isn’t recommended for power modes above 15W.
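For completeness, this is roughly how power draw can be sampled from Python via tegrastats while inference runs (a sketch; it assumes tegrastats is on PATH, and the 1-second interval and 10-second window are arbitrary choices):

import subprocess
import time

# Stream tegrastats output for ~10 seconds while the model is running.
proc = subprocess.Popen(
    ["tegrastats", "--interval", "1000"],   # one sample per second
    stdout=subprocess.PIPE,
    text=True,
)
try:
    start = time.time()
    while time.time() - start < 10:
        line = proc.stdout.readline()
        if not line:
            break
        # Power rails (VDD_IN etc.) appear in each line; the exact field
        # names differ between Jetson modules, so just log the raw line.
        print(line.strip())
finally:
    proc.terminate()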
So I’m curious: under what conditions does the 25W or MAXN SUPER mode actually come into play, and when would it be beneficial to enable jetson_clocks?