How to measure TensorRT fasterRCNN inference performance?

Hi,
Thank you for the example https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#fasterrcnn_sample — I compiled it and ran it successfully on a V100 GPU.
My question is: how do I estimate inference speed? I'm aware of this topic https://devtalk.nvidia.com/default/topic/1029920/?comment=5239295, and I'm ready to build a queue of 100 images to measure peak throughput.

An additional question: to run this model on larger images, do I need to retrain the model from scratch and then import it with a fixed INPUT_H defined?

We created a new "Deep Learning Training and Inference" section on Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

Topic URLs will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth