When I run detection on a Jetson Nano with a 288x192 input size, the detection time varies from frame to frame: it is mostly around 10 FPS, but it randomly drops to about 5 FPS roughly 10% of the time. Is there any way I can reduce this randomness?
This is detection time only, not preprocessing time.
Can someone please help me with this?
The same thing has happened to me on the Jetson Xavier. It is taking more time there as well.
In case you haven't tried this yet, have you maximized the device performance first?
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
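To confirm the settings took effect (assuming the stock JetPack tools), you can query the current power mode and clock state:

$ sudo nvpmodel -q
$ sudo jetson_clocks --show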
Moreover, although there is no pre-processing in the loop, there are some OpenCV API calls for the camera and display.
To clarify where the latency comes from, would you mind updating the source to run TensorRT only, and checking whether this issue still occurs?
if cv2.getWindowProperty(WINDOW_NAME, 0) < 0:
    break
img = cam.read()
tic = time.time()
if img is None:
    break
while True:  # loop with same image
    boxes, confs, clss = trt_yolo.detect(img, conf_th)
    toc = time.time()
    curr_fps = 1.0 / (toc - tic)
    # calculate an exponentially decaying average of fps number
    fps = curr_fps if fps == 0.0 else (fps*0.95 + curr_fps*0.05)
    tic = toc
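A decaying FPS average smooths out exactly the occasional slow frames you are trying to see. To tell whether the slowdown is a heavy tail or a shifted mean, it can help to record per-call latency percentiles instead. Below is a minimal sketch; `measure_latency` and `fake_detect` are hypothetical names, and `fake_detect` is only a stand-in for `trt_yolo.detect` so the snippet runs off-device:

```python
import random
import time

def measure_latency(detect, img, n_iters=200):
    """Time n_iters detection calls on the same image and return
    the (median, 90th, 99th percentile) latencies in seconds."""
    times = []
    for _ in range(n_iters):
        tic = time.perf_counter()
        detect(img)
        times.append(time.perf_counter() - tic)
    times.sort()
    pick = lambda q: times[min(int(q * n_iters), n_iters - 1)]
    return pick(0.5), pick(0.9), pick(0.99)

# Stand-in for trt_yolo.detect: fast most of the time,
# roughly twice as slow about 10% of the time.
def fake_detect(img):
    time.sleep(0.004 if random.random() < 0.1 else 0.002)

if __name__ == "__main__":
    p50, p90, p99 = measure_latency(fake_detect, img=None, n_iters=50)
    print(f"p50={p50*1000:.1f} ms  p90={p90*1000:.1f} ms  p99={p99*1000:.1f} ms")
```

If p50 stays near 100 ms while p99 is near 200 ms, the drops are a tail effect (clocks, scheduling, other processes) rather than the model itself getting slower.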