Estimating Depth with ONNX Models and Custom Layers Using NVIDIA TensorRT

Originally published at:

TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and a runtime that delivers low latency and high throughput for deep learning applications. TensorRT uses the ONNX format as an intermediate representation for converting models from major frameworks such as TensorFlow and PyTorch. In this post, you…

We have released a sample that demonstrates converting a PyTorch model into ONNX, transforming the ONNX graph using the new ONNX GraphSurgeon API, implementing plugins, and executing with TensorRT. We hope this helps you accelerate your models with TensorRT. If you have any questions, let us know in the comments.


Hello, when I tried to write a custom plugin and use Python to convert ONNX to TRT, I ran into some problems and couldn't find a reference example. onnx_packnet is introduced in the developer guide, but I can't find the complete contents of that example.