trt-yolo-app performance on tx2

Hello,

I am working with the trt-yolo-app. Inference time on the TX2 with YOLOv2-416 is approximately 57 ms. That is fast, but after inference, as far as I understand, I need to call

inferNet->decodeDetections(imageIdx, curImage.getImageHeight(), curImage.getImageWidth());

to get the results. This call takes anywhere from 11 ms to 18 ms. Is there any way to reduce the decoding time? Is it necessary at all? I don't know the logic behind it.
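For reference, a simplified sketch of how I am timing it (the loop context, doInference, and variable names are my approximations of the app's flow; only the decodeDetections call is exactly as above):

    #include <chrono>

    // Inside the detection loop of trt-yolo-app (simplified sketch):
    auto t0 = std::chrono::steady_clock::now();
    inferNet->doInference(input, batchSize);                        // ~57 ms on TX2
    auto t1 = std::chrono::steady_clock::now();
    auto binfo = inferNet->decodeDetections(imageIdx,
                                            curImage.getImageHeight(),
                                            curImage.getImageWidth()); // 11-18 ms
    auto t2 = std::chrono::steady_clock::now();
    double inferMs  = std::chrono::duration<double, std::milli>(t1 - t0).count();
    double decodeMs = std::chrono::duration<double, std::milli>(t2 - t1).count();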

My average processing time per image is 80 ms. Do you think that is the limit?

I have overclocked the TX2 with nvpmodel -m 0 and jetson_clocks.

Hi,

The scripts don't overclock the TX2; they just fix the clocks at their maximum rates.

The function points to this CPU decoder:
deepstream_reference_apps/yolov2.cpp at 3a8957b2d985d7fc2498a0f070832eb145e809ca · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub
You can implement it as a GPU kernel, or apply some other optimization to accelerate it.
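For example, here is a rough, untested sketch of what such a kernel could look like. It assumes the region-layer output keeps the CPU decoder's channel-major layout ([anchor * (5 + numClasses)][gridY][gridX]); DecodedBox and all parameter names are hypothetical placeholders, not the app's actual BBoxInfo API:

    #include <cuda_runtime.h>

    // Hypothetical output struct for this sketch only.
    struct DecodedBox { float x, y, w, h, score; int label; };

    __device__ float sigmoidGPU(float x) { return 1.0f / (1.0f + expf(-x)); }

    // One thread per (grid cell, anchor); mirrors the per-cell work of the
    // CPU decoder: sigmoid on x/y/objectness, exp on w/h, softmax over classes.
    __global__ void decodeYoloV2Kernel(const float* predictions, // network output
                                       const float* anchors,     // 2 * numAnchors
                                       DecodedBox* out,          // numAnchors * grid^2
                                       int gridSize, int numAnchors,
                                       int numClasses, float probThresh)
    {
        int cell = blockIdx.x * blockDim.x + threadIdx.x;
        int b = blockIdx.y;
        if (cell >= gridSize * gridSize || b >= numAnchors) return;

        int gx = cell % gridSize;
        int gy = cell / gridSize;
        int stride = gridSize * gridSize;
        const float* p = predictions + b * (5 + numClasses) * stride
                         + gy * gridSize + gx;

        float bx  = gx + sigmoidGPU(p[0 * stride]);
        float by  = gy + sigmoidGPU(p[1 * stride]);
        float bw  = anchors[2 * b]     * expf(p[2 * stride]);
        float bh  = anchors[2 * b + 1] * expf(p[3 * stride]);
        float obj = sigmoidGPU(p[4 * stride]);

        // Numerically stable softmax over the class logits, tracking the max.
        float maxLogit = p[5 * stride];
        for (int c = 1; c < numClasses; ++c)
            maxLogit = fmaxf(maxLogit, p[(5 + c) * stride]);
        float sum = 0.0f, best = 0.0f;
        int bestC = 0;
        for (int c = 0; c < numClasses; ++c) {
            float e = expf(p[(5 + c) * stride] - maxLogit);
            sum += e;
            if (e > best) { best = e; bestC = c; }
        }

        float score = obj * best / sum;
        int idx = b * stride + cell;
        out[idx].x = bx / gridSize;    // normalized box center and size
        out[idx].y = by / gridSize;
        out[idx].w = bw / gridSize;
        out[idx].h = bh / gridSize;
        out[idx].label = bestC;
        out[idx].score = (score >= probThresh) ? score : 0.0f; // zeros skipped later
    }

    // Launch sketch for YOLOv2-416 (13x13 grid, 5 anchors):
    // dim3 block(256);
    // dim3 grid((13 * 13 + 255) / 256, 5);
    // decodeYoloV2Kernel<<<grid, block>>>(dPred, dAnchors, dOut, 13, 5,
    //                                     numClasses, probThresh);

This only moves the per-box math onto the GPU; you would still copy the surviving boxes back and run NMS on the host afterwards.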

Thanks.