Running the TLT on a Jetson Nano

To preface, I have little experience with the Jetson Nano and Nvidia’s software, but I was given a task that requires the use of Nvidia’s TLT. I was following the directions of this guide https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html to set up the TLT on the Jetson Nano, and when I entered this command:

docker run --runtime=nvidia -it -v /home/tyler/tlt-experiments:/workspace/tlt-experiments nvcr.io/nvidia/tlt-streamanalytics:2.0_py3 /bin/bash

the following error occurred:

standard_init_linux.go:211: exec user process caused "exec format error"

Researching the error led me to the following thread on this forum: Docker run error - "exec format error"

Reading through the thread, it seems the TLT can’t be run on the Jetson Nano itself and instead relies on a separate host machine to train models. Is there a way to make this work solely on the Jetson Nano? Any help regarding this problem is appreciated.
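If it helps anyone hitting the same wall: "exec format error" generally means the container image was built for a different CPU architecture than the host. A quick sketch of how one might confirm the mismatch (the image tag is taken from the command above; the docker check assumes the image has already been pulled):

```shell
# Host CPU architecture: a Jetson Nano reports "aarch64"
host_arch=$(uname -m)
echo "host: $host_arch"

# Architecture the image was built for: an x86-64 image reports "amd64",
# which the Nano's ARM kernel cannot exec
if command -v docker >/dev/null 2>&1; then
  docker image inspect --format '{{.Architecture}}' \
    nvcr.io/nvidia/tlt-streamanalytics:2.0_py3 2>/dev/null || true
fi
```

If the two values differ (e.g. aarch64 host vs. amd64 image), the container cannot run on that machine.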

The docker image should work on a host PC.
After training, the etlt model or TRT engine can be deployed to the Jetson Nano.
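For the deployment step, a rough sketch of the DeepStream nvinfer configuration entries that reference an exported etlt model (all paths, the key, and the class count below are placeholders, not values from this thread) might look like:

```ini
[property]
# Placeholder paths and key -- substitute your own exported model and NGC key
tlt-encoded-model=/path/to/model.etlt
tlt-model-key=<your-model-key>
labelfile-path=/path/to/labels.txt
# Optional: point at a pre-built TensorRT engine to avoid rebuilding on startup
model-engine-file=/path/to/model.etlt_b1_gpu0_fp16.engine
num-detected-classes=3
```

The "Integrating TAO Models into DeepStream" documentation linked below covers the full set of properties.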

The requirements in the TLT documentation Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation suggest that training be done on Ubuntu. For Windows users, would a virtual machine running Ubuntu be able to run the training portion of the TLT?

@LearningGuy
It is expected to work.