ESS Model on CPU?

We’ve been using the ESS model mostly on a Jetson Orin for robot navigation, and it’s been great. We also have a use case where we need to run this model without a GPU. Is it possible to run the model without TensorRT? I noticed it uses 4 custom layers, which are supplied as pre-built binaries for TensorRT only in the form of a plugin. Is there a similar plugin for ONNX Runtime?


Hi,
Here are some suggestions for the common issues:

1. Performance

Please run the commands below before benchmarking a deep learning use case:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

2. Installation

Installation guides for deep learning frameworks on Jetson:

3. Tutorial

Getting-started deep learning tutorial:

4. Report issue

If these suggestions don’t help and you want to report an issue to us, please share the model, the commands/steps, and the customized app (if any) so we can reproduce it locally.

Thanks!

Hi,

If your model can run with PyTorch or another framework in CPU mode, you can deploy it on Jetson without using the GPU.
For the plugin problem, you can check whether the layers are listed in the ONNX Runtime documentation below:

Thanks.

ESS is an NVIDIA-supplied model from NGC: ESS DNN Stereo Disparity | NVIDIA NGC

The older versions (pre-4.0) could be converted to ONNX easily and run anywhere. With 4.0, fused operations were added that can only run on TensorRT. If code or ONNX Runtime plugins for those operations could be provided, that would be very helpful.

Hi,

TensorRT is supported on the Jetson Orin.
Based on the “Performance” section below, the model should be able to run on AGX Orin:

Does using TensorRT work for you?

Thanks.