Speeding Up Deep Learning Inference Using TensorRT

Originally published at: https://developer.nvidia.com/blog/speeding-up-deep-learning-inference-using-tensorrt/

Looking for more? Check out the hands-on DLI training course: Optimization and Deployment of TensorFlow Models with TensorRT. This is an updated version of How to Speed Up Deep Learning Inference Using TensorRT. This version starts from a PyTorch model instead of the ONNX model, upgrades the sample application to use TensorRT 7, and replaces…

This line:

>> tar xvf speeding-up-unet.7z # Unpack the model data into the unet folder

is out of date. It throws the error “this does not look like a tar archive”, because the file is a 7z archive rather than a tar archive.

The fix is:

apt update
apt install p7zip-full   # provides the 7z command
7z x speeding-up-unet.7z # unpack the model data into the unet folder
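If you want to confirm why tar rejects the file before extracting, one option is to look at its leading magic bytes: a 7z archive starts with the ASCII characters “7z” (followed by the bytes BC AF 27 1C), whereas a tar archive has no such header. The sketch below uses a synthetic demo.7z file carrying that signature, since it only illustrates the check; with the real download you would run the same check on speeding-up-unet.7z itself.

```shell
# Create a synthetic file that begins with the 7z signature
# (bytes 37 7A BC AF 27 1C, written here as octal escapes):
printf '7z\274\257\047\034' > demo.7z

# A 7z archive's first two bytes are the ASCII characters "7z";
# a tar archive would not start this way, which is why tar rejects it.
head -c 2 demo.7z && echo
```

The same idea is what the `file` utility uses to identify archive types, if it happens to be installed.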

The article or the file should be updated, if possible.