Inference Benchmarks - TensorRT Version?

https://developer.nvidia.com/deep-learning-performance-training-inference#resnet50-throughput

Hi,

The above link contains inference latency and throughput numbers. Could you please provide the TensorRT version that was used when recording these metrics?

Also, could you please add metadata (tool versions, etc.) alongside the silicon and model details for these published numbers? It would be a great help.

Cheers!

Hello, this page may contain more of the details you are looking for: