I am using OpenCV (4.5.4) to run inference on a pre-trained model. To use the GPU of my Jetson Nano, the following commands are present in my code:
In addition to these two commands, I also use:
sudo nvpmodel -m 0
From a performance point of view, I cannot match the frame rate (fps) that SegNet inference achieves.
I suspect that the commands I reported above do not take advantage of TensorRT, but I could be wrong.
Do you have any suggestions for speeding up inference in OpenCV?