TAO .etlt to TensorRT Engine Conversion on Jetson Orin / WSL2 / Docker Failed

I’m having several issues converting a pre-trained NVIDIA TAO .etlt model, specifically NVIDIA’s EmotionNet (EmotionNet | NVIDIA NGC), to a TensorRT .engine optimized for the Jetson Orin Nano Super (TensorRT 10.3, CUDA 12.6, cuDNN 9.x), **without depending on TAO containers bound to TensorRT 8.x**.

Platform and Environment

a. Hardware

  • Local Laptop: Windows 11 Pro + NVIDIA RTX 2000 Ada (8GB VRAM)
  • Jetson Orin Nano Super (target deployment, JetPack 6.x)

b. Software - W11

  • Host: Windows 11 Pro (22H2), WSL2 (Ubuntu 22.04.5 LTS, Kernel 5.10.16)
  • Docker: Multiple attempts to install and run Docker both natively in WSL2 and via Docker Desktop (failed due to access restrictions, service failures, iptables issues, and incompatibility with systemd)
  • CUDA: CUDA 12.6 (and also legacy 11.5 present)
  • cuDNN: 9.3.0.75 (JetPack 6, for Orin compatibility)
  • TensorRT: 10.3.0.30 (manually installed via pip and local debs/wheels; verified working)
  • TAO Toolkit: Tried via NGC containers (nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy), but these are bundled with TensorRT 8.x.
  • TAO CLI and Converter: Attempted native and container installs; a TAO Converter for TensorRT 10 does not exist as a standalone binary or pip package as of July 2025.

c. Software - Jetson Orin Nano Super

  • Platform: aarch64
  • OS: Ubuntu 22.04 (Jammy Jellyfish)
  • Kernel: 5.15.148-tegra
  • Python version: 3.10.12
  • CUDA: 12.6.68
  • CUDA Arch BIN: 8.7
  • L4T: 36.4.4
  • cuDNN: 9.3.0.75
  • VPI: 3.2.4
  • Vulkan: 1.3.204
  • TensorRT: 10.3.0.30
  • deepstream-app: version 7.1.0
  • DeepStreamSDK: 7.1.0
  • CUDA Driver Version: 12.6
  • CUDA Runtime Version: 12.6

Main Steps and Issues

a. Docker Environment

  • Docker on Windows/WSL2 was unstable due to service issues, the lack of systemd (see the workaround sketched after this list), and persistent iptables/NAT errors (failed to register the "bridge" driver).
  • Docker Desktop was unusable (Backend API/Access Denied errors).
  • Running Docker directly in WSL2 partially succeeded for some containers but always failed for TAO deployment containers (missing runtime, wrong TensorRT version).
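For reference, the systemd workaround we tried in WSL2 looked like the following; it’s a sketch, assuming a recent WSL release (0.67.6+), and in our case Docker still only partially worked afterwards:

```bash
# Enable systemd inside the WSL2 distro (requires WSL 0.67.6 or newer).
cat <<'EOF' | sudo tee /etc/wsl.conf
[boot]
systemd=true
EOF
# Then, from Windows, run `wsl --shutdown`, restart the distro, and
# manage the Docker service the usual way:
sudo systemctl enable --now docker
```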

b. TAO Deploy Container

  • Pulled and ran nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy.
  • The container only supports TensorRT 8.5 (not compatible with JetPack 6 or Orin Nano’s TensorRT 10.3).
  • Conversion commands like classification_tf1 gen_trt_engine ... ran but produced errors (an invocation sketch follows this list):
    • Invalid nonce size (0) for CTR (seen when the experiment spec was not set or empty)
    • Message type "Experiment" has no field named "model" (experiment spec schema incompatibility)
    • Workaround: left the experiment spec file empty to proceed past the schema error
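For completeness, the invocation we ran looked roughly like this; the flags are reconstructed from memory and should be verified with `classification_tf1 gen_trt_engine --help` inside the container, and <model_key> stands for the NGC key of the pre-trained model:

```bash
# Inside nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy (flags assumed, not authoritative).
classification_tf1 gen_trt_engine \
  -m /workspace/model.etlt \
  -k <model_key> \
  -e /workspace/experiment_spec.yaml \
  --engine_file /workspace/model.engine
```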

c. Model and Format Challenges

  • EmotionNet’s .etlt is an encrypted, TAO-specific serialized format.
  • The TAO 5.5/5.2 containers have no direct support for exporting an engine that targets TensorRT 10.x.
  • tao-converter does NOT exist as a Python or binary package for TensorRT 10.x as of this writing.

d. Manual TensorRT 10 Conversion

  • Installed latest tensorrt, tensorrt-cu12, tensorrt-cu12-bindings, and tensorrt-cu12-libs via pip (from NGC and PyPI) in WSL2.
  • Could import the TensorRT 10.3 Python API and run code (sanity check sketched after this list), but:
    • Cannot convert .etlt to .engine without the official TAO Converter or access to decrypted ONNX/UFF.
    • No official workflow exists to export TAO-protected .etlt models to engine using native Python and TensorRT 10.x bindings.
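A quick sanity check along these lines confirmed the bindings load:

```bash
# Verify the pip-installed TensorRT Python bindings import and report their version.
python3 -c "import tensorrt as trt; print(trt.__version__)"
# On this setup it should print 10.3.0.30.
```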

e. Attempts with ONNX

  • Investigated if .etlt could be extracted to ONNX and converted natively with TensorRT 10, but .etlt is encrypted and TAO-specific.

Findings and Outstanding Gaps

  • TAO Deploy containers do not support TensorRT 10.x, only 8.x.
  • There is NO TAO Converter for TensorRT 10.x (neither as pip nor as NGC binary release).
  • Engines generated with TensorRT 8.x are NOT compatible/portable to JetPack 6 (Orin) with TensorRT 10.x.
  • Installing TAO Python packages natively does not provide an .etlt-to-.engine export path, as the converter backend is missing.
  • Attempted to convert in both Docker and native (WSL2) with the same limitations.
  • All official documentation and NGC model cards confirm only TensorRT 8.x support for export as of July 2025.
  • Current workflow is a dead-end for Jetson Orin + TensorRT 10.x if the original .etlt/.onnx cannot be converted on the device.
  • I can’t downgrade TensorRT, CUDA, JetPack, or cuDNN, because the rest of the solution depends on these versions and is working fine.

What am I looking for?

  1. Provide a TAO Converter (standalone or containerized) for TensorRT 10.x and CUDA 12.6+ that can run natively on Orin or compatible x86_64 systems.
  2. Alternatively, document a method to decrypt .etlt and extract .onnx or direct TensorRT engine export on JetPack 6.
  3. Clarify whether future TAO releases will allow engine export for TensorRT 10.x, or whether there is an internal workflow for this use case.
  4. Await official guidance from NVIDIA on .etlt-to-.engine conversion for the Jetson Orin Nano (TensorRT 10.x); it’s very urgent.
  5. Evaluate retraining/exporting models directly on Orin hardware if the toolchain can be made available.
  6. Track future TAO Toolkit and TensorRT container releases.

I’ve been trying to do this conversion for three days, and I really don’t want to waste any more time. If it’s not possible, I’ll have to fall back to TensorFlow or PyTorch models, which I’d rather avoid since CUDA compatibility isn’t always straightforward.

You can convert the .etlt to a .onnx file via the guide in tao_toolkit_recipes/tao_forum_faq/FAQ.md (NVIDIA-AI-IOT/tao_toolkit_recipes on GitHub).
Please use Netron to double-check the ONNX file.
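For example, you can save the FAQ’s decode snippet as a small script (here called decode_etlt.py, a placeholder name) and run it inside a TAO TF1 container where the nvidia_tao_tf1 package exists; the -m/-o/-k flags mirror that snippet rather than an official CLI:

```bash
# Run inside a TAO TF1 container; decode_etlt.py is the snippet from the
# tao_toolkit_recipes FAQ saved locally (name and flags are illustrative).
python decode_etlt.py -m emotionnet.etlt -o emotionnet.onnx -k <ngc_model_key>
```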

Thank you, Morganh. I tried several times, but the remaining issue is that the import “from nvidia_tao_tf1.encoding import encoding” fails: the module is not found in the Docker image. I tried installing it, but I’m still experiencing the same issue.

Currently, I’m using the latest TAO images on my laptop (TAO Toolkit | NVIDIA NGC):

  1. nvcr.io/nvidia/tao/tao-toolkit:6.0.0-deploy
  2. nvcr.io/nvidia/tao/tao-toolkit:6.0.0-pyt
  3. nvcr.io/nvidia/tao/tao-toolkit:6.0.0-tf2

And I’m using the latest L4T TensorRT image on the Jetson Orin to convert the ONNX file to an engine, to match the TensorRT version on the device (NVIDIA L4T TensorRT | NVIDIA NGC); a sketch of that step follows the list below.

  1. nvcr.io/nvidia/l4t-tensorrt:r10.3.0-devel
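Roughly, the on-device build step looks like this (paths are placeholders; trtexec may live at /usr/src/tensorrt/bin/trtexec inside the container):

```bash
# Build the engine on the Orin itself so it matches the device's TensorRT 10.3.
docker run --rm -it --runtime nvidia -v "$(pwd)":/models \
  nvcr.io/nvidia/l4t-tensorrt:r10.3.0-devel \
  trtexec --onnx=/models/model.onnx --saveEngine=/models/model.engine --fp16
```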

Additionally, I cloned the NVIDIA/tao_tutorials repository (quick-start scripts and tutorial notebooks for TAO Toolkit) to use the TAO tutorials. At this point, I don’t know what else to do.

I need to use TensorRT, but the documentation is either insufficient or incomplete, making it difficult to export ETLT models to ONNX or an engine. The current problem is with the parameters file that the latest classification_pyt and classification_tf2 tasks require.

Honestly, if I can’t solve this issue as soon as possible, I think I need to switch to TensorFlow or PyTorch. It has taken me more than a week to try this export without success.

Using the default frameworks on the Jetson Orin was essential to avoid overloading the device; however, I’m finding this very complex to do at this time.

Thanks.

Could you please use nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 as mentioned in the recipe? Thanks.
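A minimal sketch of entering that container, assuming the NVIDIA Container Toolkit is configured and the .etlt sits in the current directory:

```bash
docker run --rm -it --gpus all \
  -v "$(pwd)":/workspace -w /workspace \
  nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
```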


Also, the nvidia-tao-deploy package (nvidia-tao-deploy · PyPI) is available now. You can install it on Jetson:
Please run $ pip install nvidia-tao-deploy==6.0.0

Hi Morganh, thanks a lot for your collaboration. Unfortunately, it doesn’t work. I got this error when trying to do the installation on the Jetson Orin Nano Super device:

```
ERROR: Ignored the following versions that require a different Python version:
  0.6.2 Requires-Python ==3.7.*; 0.6.4 Requires-Python ==3.7.*; 0.6.6 Requires-Python ==3.8.*;
  5.0.0.390.dev0 Requires-Python ==3.8.*; 5.0.0.418.dev0 Requires-Python ==3.8.*; 5.0.0.423.dev0 Requires-Python ==3.8.*
ERROR: Could not find a version that satisfies the requirement nvidia-eff==0.6.6 (from nvidia-tao-deploy) (from versions: 0.0.1.dev4, 0.0.1.dev5)
ERROR: No matching distribution found for nvidia-eff==0.6.6
```

Can you share the full log and commands?

This worked very well, thank you. I’ve already managed to convert the model to ONNX in WSL2 (using the nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 image) and then to an engine on the Jetson Orin (trtexec --onnx=model.onnx --saveEngine=model.engine --fp16).
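For completeness, the resulting engine can also be load-tested on the Orin with trtexec (on JetPack the binary typically lives at /usr/src/tensorrt/bin/trtexec):

```bash
# Deserialize the engine and run a benchmark pass to confirm it loads and executes.
/usr/src/tensorrt/bin/trtexec --loadEngine=model.engine
```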

I appreciate your help.
