Can the tlt launcher work without online access to the NGC repository?

Hello,

I have an x64 Ubuntu 18.04 Linux machine that is not connected directly to the internet; it sits in an air-gapped environment.

This means I have to manually pull all required NGC images on a separate machine that is connected to the internet and then move them to my work machine.

Then I need a way to tell the tlt launcher to look for all required NGC images locally instead of in the nvcr.io repository.

Is it possible?
If yes, please describe how to do it.

Thanks,

For your case, you can log in to the TLT 3.0 docker directly and run the tasks inside it.

First, on the machine that is connected to the internet, pull the docker image:
$ docker pull nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3

Then copy the docker image to the machine without online access.
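One common way to do this (assuming the file can be carried over on removable media or an internal file server) is to export the image with docker save, transfer the resulting tar file, and load it with docker load on the offline machine. The tar file name here is only an example:

$ docker save -o tlt-streamanalytics-v3.0-py3.tar nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3   # file name is an example

After transferring the tar file to the offline machine, load it and confirm the image is available locally:

$ docker load -i tlt-streamanalytics-v3.0-py3.tar
$ docker images nvcr.io/nvidia/tlt-streamanalytics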

Then log in to the docker:
$ docker run --runtime=nvidia -it nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3 /bin/bash
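If your dataset, spec files, and downloaded pretrained weights are on the host, you will probably also want to mount a host directory into the container so the tasks can see them. The host path below is only a placeholder, not something from this thread:

$ docker run --runtime=nvidia -it -v /path/on/host/tlt-experiments:/workspace/tlt-experiments nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3 /bin/bash   # host path is a placeholder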

Run tasks, for example, detectnet_v2 training.
# detectnet_v2 train xxx

Thanks @Morganh,
Can you please clarify what the 3.0 docker is? Did you mean TLT version 3.0?

If you mean that I should download the tlt launcher manually, I have already downloaded the tlt launcher *.whl file manually from here:
tlt launcher *.whl files

I did this on my connected machine, moved the file to the unconnected machine, and then installed it successfully using the pip3 install command.
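For reference, this is just a local install of the downloaded wheel; the exact file name depends on the launcher version you downloaded, so the one below is only an example:

$ pip3 install ./nvidia_tlt-*.whl   # wheel file name is an example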

It seems that the basic tlt launcher commands work well, but when I try to run commands that attempt to connect to the nvcr.io repository, they fail because I don't have internet access.

I’m trying to work with the yolo_v4 pre-trained model with ResNet-18 as its backbone.

I manually downloaded the Resnet18 *.hdf5 file from here:
tlt Resnet18 backbone
and moved it to the unconnected machine.

When I tried to run the train command, I got the following error:

Docker pull failed. 500 Server Error: Internal Server Error (“Get https://nvcr.io/v2: dial tcp: lookup nvcr.io: no such host”)

So I understand that the train command (and maybe others such as evaluate, inference, etc.) tries to pull additional NGC images.

So I'm wondering whether I can find out which NGC images are required for each pre-trained model and its selected backbone, so that I can pull them manually on my connected machine, move them to my unconnected machine, and finally point the tlt launcher to look for them locally instead of in the nvcr.io repository.

Is it possible?

Thanks

Yes, it is the TLT 3.0 docker, i.e., nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3.

As mentioned above, you can ignore the tlt launcher. You just need to pull the docker directly and copy it to the machine without online access.
$ docker pull nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3
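Once the image has been loaded on the offline machine (for example with docker save / docker load as sketched above), you can start the container and run your yolo_v4 task inside it. The spec file path, results directory, and key below are placeholders, not values from this thread:

$ docker run --runtime=nvidia -it nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3 /bin/bash
# yolo_v4 train -e /workspace/specs/yolo_v4_train.txt -r /workspace/results -k <your_ngc_key>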
