Hello everyone.
I am using OpenCV (4.5.4) to run inference with a pre-trained model. In order to use the GPU of my Jetson Nano, the following calls are present in my code:
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
In addition to these two calls, I also run:
sudo nvpmodel -m 0
sudo jetson_clocks
Performance-wise, however, I can't reach the frame rate (fps) that SegNet inference achieves.
I suspect the calls above don't take advantage of TensorRT, but I could be wrong.
Do you have any suggestions for speeding up inference in OpenCV?
Thanks.