Estimating Depth with ONNX Models and Custom Layers Using NVIDIA TensorRT

Originally published at: https://developer.nvidia.com/blog/estimating-depth-beyond-2d-using-custom-layers-on-tensorrt-and-onnx-models/

TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and a runtime that delivers low latency and high throughput for deep learning applications. TensorRT uses the ONNX format as an intermediate representation for converting models from major frameworks such as TensorFlow and PyTorch. In this post, you…

We have released a sample that demonstrates converting a PyTorch model to ONNX, transforming the ONNX graph using the new ONNX GraphSurgeon API, implementing plugins, and executing the result with TensorRT. We hope this helps you accelerate your models with TensorRT. If you have any questions, let us know in the comments.
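
For readers who want a feel for the first two steps before opening the sample, here is a minimal sketch; the toy model and the constant-folding pass are illustrative placeholders, not code from the sample itself:

```python
import torch
import onnx
import onnx_graphsurgeon as gs

# Step 1: export a small placeholder PyTorch model to ONNX.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU())
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)

# Step 2: load the graph with ONNX GraphSurgeon and transform it.
# Here we only fold constants and remove dead nodes; the sample uses
# passes like this to prepare the graph for TensorRT plugin ops.
graph = gs.import_onnx(onnx.load("model.onnx"))
graph.fold_constants().cleanup()
onnx.save(gs.export_onnx(graph), "model_modified.onnx")
```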


Hello, when I tried to write a custom plugin and use Python to convert ONNX to TensorRT, I ran into some problems and couldn't find a reference example. onnx_packnet is introduced in the developer guide, but I can't find the complete source for this example.
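
In case a skeleton helps while tracking down the sample, the usual pattern for building an engine from ONNX with the TensorRT 7/8 Python API looks roughly like this; the plugin library and file names are placeholders, and your compiled plugin must register its creator with the plugin registry:

```python
import ctypes
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Load the compiled plugin library (placeholder name) so its plugin
# creators register themselves, then initialize the built-in plugins.
ctypes.CDLL("./libmy_plugin.so")
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Parse the ONNX model into a TensorRT network definition.
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parsing failed")

# Build and serialize the engine.
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB
engine = builder.build_engine(network, config)
with open("model.plan", "wb") as f:
    f.write(engine.serialize())
```

Note that ops the ONNX parser does not recognize are looked up in the plugin registry by name, so the node's op type in the graph must match the name your plugin creator registers.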

For the PackNet model on TensorRT:
Did you evaluate it on Jetson Nano devices?
What FPS was achieved?

@ehrichwen You might have already found it, but here is the link to the sample anyway: TensorRT/samples/python/onnx_packnet at main · NVIDIA/TensorRT · GitHub

@shanbhagdhiraj We didn’t evaluate it on Jetson devices (so no FPS numbers), but feel free to try it out. We don’t anticipate any issues building it on Jetson.
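
If you do try it, the `trtexec` tool that ships with TensorRT is a quick way to get latency and throughput numbers straight from the ONNX file, e.g. `trtexec --onnx=model.onnx --fp16` (the file name here is a placeholder, and you would add `--plugins=<your .so>` if the model needs custom layers); the reported queries per second is effectively the FPS at batch size 1.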

@peri.dheeraj Thanks for the reply.
Do you expect it to run in real time on a Jetson Nano?