TRT5: sample code to measure the TensorRT inference performance

Hi all, could you please provide sample code to measure TensorRT engine inference performance, as shown on this page? https://developer.nvidia.com/deep-learning-performance-training-inference

I am using TRT5 with the Python sample uff_resnet50.py, and here are the metrics I need to extract:
Throughput: images per second
Latency: latency in milliseconds
Power efficiency: inferences per second per watt
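(Not an official NVIDIA sample, but as a starting point, the throughput and latency numbers can be measured with a plain timing loop around whatever call executes one batch. The sketch below uses a generic `run_once` callable as a placeholder for the TensorRT execution-context call; the timing logic is the same either way. The function name and parameters are illustrative, not from any NVIDIA sample.)

```python
import time

def measure_inference(run_once, images_per_batch, warmup=10, iterations=100):
    """Time a single-batch inference callable.

    run_once is a placeholder for whatever executes one batch
    (e.g. a TensorRT execution-context call). Returns
    (average latency in ms, throughput in images/sec).
    """
    # Warm-up runs are excluded from timing so one-time setup
    # costs (allocation, lazy initialization) don't skew results.
    for _ in range(warmup):
        run_once()

    start = time.perf_counter()
    for _ in range(iterations):
        run_once()
    elapsed = time.perf_counter() - start

    latency_ms = elapsed / iterations * 1000.0            # avg ms per batch
    throughput = iterations * images_per_batch / elapsed  # images per second
    return latency_ms, throughput
```

For a GPU workload the timed call must be synchronous (or followed by a stream/device synchronize), otherwise you time only the kernel launch rather than the inference itself.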

Thanks,

Vilmara

Hello,

This is the basis of the sample code used in these models
https://developer.nvidia.com/deep-learning-examples

Thanks for your reply. Is there a specific sample that shows how to calculate the TensorRT inference performance shown at this link: https://developer.nvidia.com/deep-learning-performance-training-inference (throughput in images per second, latency in milliseconds, and power efficiency in inferences per second per watt)?

I am using TensorFlow. I have looked at the samples you sent, but they mainly show how to measure performance using TensorFlow or the TF-TRT integration; I need something more specific that shows how to measure performance using TensorRT directly.
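(For the power-efficiency metric specifically, the arithmetic is simple once you have an average board power reading; on NVIDIA GPUs that reading is typically sampled via NVML, e.g. the `pynvml` package's `nvmlDeviceGetPowerUsage()`, which reports milliwatts. The helper below is an illustrative sketch, not part of any NVIDIA sample.)

```python
def inferences_per_sec_per_watt(num_inferences, elapsed_s, avg_power_mw):
    """Power efficiency = (inferences per second) / watts.

    avg_power_mw is the average board power in milliwatts, matching
    the unit returned by NVML's nvmlDeviceGetPowerUsage(); on real
    hardware you would sample it while the timed loop runs.
    """
    watts = avg_power_mw / 1000.0
    return (num_inferences / elapsed_s) / watts
```

For example, 1000 inferences in 2 s at an average 250 W gives 500 inferences/s / 250 W = 2 inferences per second per watt.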

Thanks

Hello,

We are in touch with the author of the benchmark to see what code, if any, can be shared. Please stay tuned.

Thanks, I appreciate it