I am testing objectDetector_Yolo from deepstream-6.0/sources/. My goal is to measure model latency with different input sizes for the TRT engine. As expected, latency decreases when I reduce the input image size; however, I noticed that this is not the case for the engine file size.
Below you'll find the input sizes I have tried and the engine file sizes I got from TensorRT.
| Input Size | Engine file size (MB) | Deepstream FPS | gst-launch FPS |
| --- | --- | --- | --- |
Is there any reason why the engine file size decreases and then increases again?
Does the engine file size have an impact on the percentage of GPU that will be used by the application?
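For what it's worth, in a fully convolutional detector like YOLOv3 the per-frame convolution cost scales with the input spatial area, so latency should fall roughly with the square of the input side. The serialized engine, by contrast, mostly stores network weights (plus tactic-dependent metadata chosen by TensorRT's kernel autotuner), so its file size need not track input size monotonically. A minimal sketch of the expected compute scaling (the input sides below are illustrative, not the ones from my table):

```python
def relative_conv_cost(side, base=608):
    """Approximate per-frame conv FLOPs relative to a base x base input.

    For a fully convolutional network, the spatial work scales with
    the number of input pixels, i.e. with (side / base) ** 2.
    """
    return (side / base) ** 2

# Illustrative input sides; smaller input -> proportionally less compute,
# while the weights stored in the engine file stay the same.
for side in (608, 416, 320):
    print(f"{side}x{side}: {relative_conv_cost(side):.3f}x the baseline cost")
```

This is only a first-order model; actual latency also depends on which kernels TensorRT selects for each shape.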
TensorRT Version: 8.2.1
GPU Type: Jetson TX2 NX
JetPack: 4.6.2 [L4T 32.7.2]
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System + Version: Ubuntu 18.04
Baremetal or Container (if container which image + tag): Baremetal
After following the instructions from the README file in /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/, I changed the width and height in yolov3.cfg to the values from the table above, and finally I ran:
$ deepstream-app -c deepstream_app_config_yoloV3.txt
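For concreteness, the edit is to the `[net]` section at the top of yolov3.cfg; the 416x416 values below are illustrative (YOLOv3 input dimensions must be multiples of 32):

```ini
[net]
# Illustrative input size; replace with the size under test.
# Changing these forces nvinfer to rebuild the TRT engine on the next run.
width=416
height=416
```

Note that if a previously built engine file still matches the name nvinfer expects, it may be reused instead of rebuilt, so I delete the old .engine file between runs.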