Randomness at inference time

When I run detection on a Jetson Nano with a 288x192 input size, the detection time varies from run to run: it is mostly around 10 FPS, but roughly 10% of the time it randomly drops to 5 FPS. Is there any way I can reduce this randomness?
This is detection time only, not preprocessing time.
Can someone please help me with this?
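
For reference, this is roughly how I measure it (a minimal sketch; it assumes the trt_yolo, cam and conf_th objects from the demo script are already set up):

import time

img = cam.read()                                      # frame capture (not timed)
tic = time.time()
boxes, confs, clss = trt_yolo.detect(img, conf_th)    # TensorRT detection only
toc = time.time()
print('detection: %.1f ms (%.1f FPS)' % ((toc - tic) * 1e3, 1.0 / (toc - tic)))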

The same thing has happened to me on the Jetson Xavier as well.
It is also taking more time.

Hi,

Which inference framework do you use?
Are you using TensorRT?

Thanks.

Yes, I am using TensorRT only.

Hi,

Could you share the source and the steps to reproduce with us?
We want to check this internally to get more information about the issue.

Thanks.

The code and library are basically this one.

Hi,

In case you have not tried this yet: have you maximized the device performance first?

$ sudo nvpmodel -m 0
$ sudo jetson_clocks
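
You can check which power mode is active afterwards (assuming a standard JetPack install) with:

$ sudo nvpmodel -q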

Moreover, although there is no pre-processing in the loop, there are some OpenCV API calls for the camera and display.
To clarify where the latency comes from, would you mind updating the source to a TensorRT-only loop and checking whether this issue still occurs?

For example:

if cv2.getWindowProperty(WINDOW_NAME, 0) < 0:
    return                      # assumes this sits inside the demo's detection-loop function
img = cam.read()                # read a single frame outside the timed loop
if img is None:
    return

fps = 0.0
tic = time.time()
while True:   # loop with the same image, so only detect() is timed
    boxes, confs, clss = trt_yolo.detect(img, conf_th)
    toc = time.time()
    curr_fps = 1.0 / (toc - tic)
    # calculate an exponentially decaying average of the fps numbers
    fps = curr_fps if fps == 0.0 else (fps*0.95 + curr_fps*0.05)
    tic = toc
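
With this change, cam.read() and the OpenCV window calls stay outside the timed loop, so the measured interval covers only trt_yolo.detect(). The exponentially decaying average also smooths single-frame spikes, which makes it easier to see whether the occasional drop really comes from TensorRT.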

Thanks.
