Thanks @Morganh,
Can you please clarify what you mean by "3.0 docker"? Did you mean the tlt 3.0 version?
If you mean that I should download the tlt launcher manually, I have already downloaded the tlt launcher *.whl file from here: tlt launcher *.whl files
I did this on my connected machine, moved the file to the unconnected machine, and then successfully installed it using the pip3 install command.
Basic tlt launcher commands seem to work well, but commands that try to connect to the nvcr.io repository fail because I don't have Internet access.
I'm trying to work with the yolo_v4 pre-trained model with Resnet18 as its backbone.
I manually downloaded the Resnet18 *.hdf5 file from here: tlt Resnet18 backbone
and moved it to the unconnected machine.
When I tried to perform the train command I got the following error:
Docker pull failed. 500 Server Error: Internal Server Error (“Get https://nvcr.io/v2: dial tcp: lookup nvcr.io: no such host”)
So, I understand that the train command (and possibly others such as evaluate, inference, etc.) tries to pull additional NGC images.
So, I'm wondering whether I can find out which NGC images are required for each pre-trained model and its selected backbone, so that I can pull them manually on my connected machine, move them to my unconnected machine, and finally point the tlt launcher to look for them locally instead of in the nvcr.io repository.
As mentioned above, you can ignore the tlt launcher. You just need to pull the docker image directly and copy it to the machine without online access.
$ docker pull nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3
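To move the pulled image to the offline machine, one common approach is to export it with `docker save` on the connected machine and import it with `docker load` on the unconnected one. A minimal sketch (the tar filename is just an example; any name works):

```shell
# On the connected machine: pull the TLT image and export it to a tar archive
docker pull nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3
docker save nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3 -o tlt-streamanalytics_v3.0-py3.tar

# Copy the tar to the unconnected machine (USB drive, internal network, etc.), then:
docker load -i tlt-streamanalytics_v3.0-py3.tar

# Verify the image is now available locally
docker images nvcr.io/nvidia/tlt-streamanalytics
```

Once the image is loaded locally, docker will find it by its original tag without contacting nvcr.io.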