Estimating Depth with ONNX Models and Custom Layers Using NVIDIA TensorRT

Originally published at: https://developer.nvidia.com/blog/estimating-depth-beyond-2d-using-custom-layers-on-tensorrt-and-onnx-models/

TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and a runtime that delivers low latency and high throughput for deep learning applications. TensorRT uses the ONNX format as an intermediate representation for converting models from major frameworks such as TensorFlow and PyTorch. In this post, you…
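As a rough illustration of that workflow, the minimal sketch below exports a PyTorch model to ONNX and then parses it into a TensorRT network with the Python API. It is not taken from the sample itself; the model choice, file names, and builder settings are assumptions, and the exact builder calls may differ across TensorRT versions.

```python
import torch
import torchvision
import tensorrt as trt

# Export a PyTorch model to ONNX (ResNet-18 used here only as a placeholder).
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)

# Parse the ONNX file into a TensorRT network and build an engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB scratch space for tactic selection
engine = builder.build_engine(network, config)
```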

We have released a sample that demonstrates converting a PyTorch model into ONNX, transforming the ONNX graph using the new ONNX GraphSurgeon API, implementing plugins, and executing the result with TensorRT. We hope this helps you accelerate your models with TensorRT. If you have any questions, let us know in the comments.
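For context on the graph-transformation step, here is a minimal ONNX GraphSurgeon sketch that rewrites an op so TensorRT can map it to a registered plugin by name. The op names "SomeOp" and "CustomPluginOp" are hypothetical placeholders, not the ops used in the released sample.

```python
import onnx
import onnx_graphsurgeon as gs

# Load the exported ONNX model into a GraphSurgeon graph.
graph = gs.import_onnx(onnx.load("model.onnx"))

# Retarget matching nodes to a custom op; at build time the ONNX parser
# can resolve an unknown op to a TensorRT plugin registered under that name.
for node in graph.nodes:
    if node.op == "SomeOp":          # hypothetical op to replace
        node.op = "CustomPluginOp"   # hypothetical plugin op name

# Remove dangling nodes/tensors and restore topological order before export.
graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_modified.onnx")
```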