Hi,
Thank you for the sample [url]https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#fasterrcnn_sample[/url]; I compiled it and ran it successfully on a V100 GPU.
My question is: how should I measure inference speed? I'm aware of this topic [url]https://devtalk.nvidia.com/default/topic/1029920/?comment=5239295[/url], and I'm ready to build a queue of 100 images to measure the peak throughput.
An additional question: to run this model on larger images, do I have to train the model from scratch and import it with a different fixed INPUT_H?