Problem with TLT folder mounting

TensorRT 8.0.1.6
Quadro RTX 5000 (dual GPU)
Driver Version: 470.82.00
CUDA Version: 11.4
Ubuntu 18.04
Python 3.6
yolo_v4

nvidia/tao/tao-toolkit-tf:
docker_registry: nvcr.io
docker_tag: v3.21.08-py3

Current dependencies:
Ubuntu: 18.04 LTS
python: 3.6.9
docker-ce: 19.03.5
docker-API: 1.40
nvidia-container-toolkit: 1.3.0-1
nvidia-docker2: 2.5.0-1
nvidia-driver: 455
python-pip: 21.06

I have previously trained my custom model using this TAO toolkit, but now I can't mount my folder properly. Since then I have upgraded my driver from 455 to 470, upgraded TensorRT to 8, and upgraded CUDA from 11.1 to 11.4. All the dependencies are satisfied, except that I can't figure out how to check the version of my current nvidia-container-runtime.

I looked up the forums, and when I run the command they suggested, I get this:

tlt yolo_v4 run ls /home/vaaan/tlt-experiments/yolo_v4/specs/yolo_v4_retrain_resnet18_kitti.txt
/home/vaaan/.local/lib/python3.6/site-packages/tlt/__init__.py:20: DeprecationWarning:
The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.

  warnings.warn(message, DeprecationWarning)
2021-12-20 17:05:15,879 [INFO] root: Registry: ['nvcr.io']
2021-12-20 17:05:15,958 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import fnmatch, glob, traceback, errno, sys, atexit, locale, imp, stat
Traceback (most recent call last):
  File "/home/vaaan/.local/bin/tlt", line 8, in <module>
    sys.exit(main())
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/entrypoint/entrypoint.py", line 115, in main
    args[1:]
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/instance_handler/local_instance.py", line 319, in launch_command
    docker_handler.run_container(command)
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/docker_handler/docker_handler.py", line 284, in run_container
    mount_data, env_vars, docker_options = self._get_mount_env_data()
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/docker_handler/docker_handler.py", line 112, in _get_mount_env_data
    raise ValueError("Mount point source path doesn't exist. {}".format(mount['source']))
ValueError: Mount point source path doesn't exist. /home/vaaan/YOUR_LOCAL_PROJECT_DIR_PATH

I use these commands to launch docker:

wget -O ngccli_linux.zip https://ngc.nvidia.com/downloads/ngccli_linux.zip && unzip -o ngccli_linux.zip && chmod u+x ngc

md5sum -c ngc.md5

echo "export PATH=\"\$PATH:$(pwd)\"" >> ~/.bash_profile && source ~/.bash_profile

ngc config set

docker login nvcr.io

docker run --runtime=nvidia -it -v /home/vaaan/tlt-experiments:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3 /bin/bash

jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root 

# Setting up env variables for cleaner command line commands.

import os

print("Please replace the variable with your key.")

%env KEY=aGJuM2dxZGltbjgwaGNnb3Fxc2h0ZXBqZGk6MzlkYjAxY2EtZWE2OC00NGRiLWI5ZmUtZWRlNDZjMTI4MjA5
%env USER_EXPERIMENT_DIR=/workspace/tlt-experiments/yolo_v4
%env DATA_DOWNLOAD_DIR=/workspace/tlt-experiments/data

# Set this path if you don't run the notebook from the samples directory.
#%env NOTEBOOK_ROOT=/home/vaaan/tlt_cv_samples_vv1.0.1/

# Please define this local project directory that needs to be mapped to the TLT docker session.
# The dataset is expected to be present in $LOCAL_PROJECT_DIR/data, while the results for the steps
# in this notebook will be stored at $LOCAL_PROJECT_DIR/detectnet_v2

%env LOCAL_PROJECT_DIR=/home/vaaan/tlt-experiments

os.environ["LOCAL_DATA_DIR"] = os.path.join(os.getenv("LOCAL_PROJECT_DIR", os.getcwd()), "data")
os.environ["LOCAL_EXPERIMENT_DIR"] = os.path.join(os.getenv("LOCAL_PROJECT_DIR", os.getcwd()), "yolo_v4")

# The sample spec files are present in the same path as the downloaded samples.
os.environ["LOCAL_SPECS_DIR"] = os.path.join(
    os.getenv("NOTEBOOK_ROOT", os.getcwd()),
    "specs"
)

%env SPECS_DIR=/workspace/tlt-experiments/yolo_v4/specs

# Showing list of specification files.
!ls -rlt $LOCAL_SPECS_DIR


Please replace the variable with your key.
env: KEY=aGJuM2dxZGltbjgwaGNnb3Fxc2h0ZXBqZGk6MzlkYjAxY2EtZWE2OC00NGRiLWI5ZmUtZWRlNDZjMTI4MjA5
env: USER_EXPERIMENT_DIR=/workspace/tlt-experiments/yolo_v4
env: DATA_DOWNLOAD_DIR=/workspace/tlt-experiments/data
env: LOCAL_PROJECT_DIR=/home/vaaan/tlt-experiments
env: SPECS_DIR=/workspace/tlt-experiments/yolo_v4/specs
total 8
-rw------- 1 1001 1001 3277 Dec 17 07:06 yolo_v4_train_resnet18_kitti.txt
-rw------- 1 1001 1001 3175 Dec 17 07:06 yolo_v4_retrain_resnet18_kitti.txt



# Mapping up the local directories to the TLT docker.

import json
mounts_file = os.path.expanduser("~/.tlt_mounts.json")

# Define the dictionary with the mapped drives
drive_map = {
    "Mounts": [
        # Mapping the data directory
        {
            "source": os.environ["LOCAL_PROJECT_DIR"],
            "destination": "/workspace/tlt-experiments"
        },
        # Mapping the specs directory.
        {
            "source": os.environ["LOCAL_SPECS_DIR"],
            "destination": os.environ["SPECS_DIR"]
        },
    ]
}

# Writing the mounts file.
with open(mounts_file, "w") as mfile:
    json.dump(drive_map, mfile, indent=4)



`!cat ~/.tlt_mounts.json`

{
    "Mounts": [
        {
            "source": "/home/vaaan/tlt-experiments",
            "destination": "/workspace/tlt-experiments"
        },
        {
            "source": "/workspace/tlt-experiments/yolo_v4/specs",
            "destination": "/workspace/tlt-experiments/yolo_v4/specs"
        }
    ]
}
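Since the launcher refuses to start when any mount "source" is missing on the host, it can help to validate the file before invoking tlt/tao. Here is a minimal sketch (the helper name `check_tlt_mounts` is my own, not part of the toolkit), mirroring the check the launcher performs in docker_handler.py:

```python
import json
import os

def check_tlt_mounts(path="~/.tlt_mounts.json"):
    """Return the mount 'source' paths that do not exist on the host.

    Mirrors the check in docker_handler._get_mount_env_data(), which
    raises ValueError for the first missing source it finds.
    """
    with open(os.path.expanduser(path)) as f:
        mounts = json.load(f)
    return [m["source"] for m in mounts.get("Mounts", [])
            if not os.path.isdir(m["source"])]
```

An empty list means every source directory exists; anything returned is a path the launcher will refuse to mount.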


View the versions of the TLT launcher

!tlt info

/usr/local/lib/python3.6/dist-packages/tlt/__init__.py:20: DeprecationWarning: 
The `nvidia-tlt` package will be deprecated soon. Going forward please migrate to using the `nvidia-tao` package.

  warnings.warn(message, DeprecationWarning)
~/.tao_mounts.json wasn't found. Falling back to obtain mount points and docker configs from ~/.tlt_mounts.json.
Please note that this will be deprecated going forward.
Configuration of the TAO Toolkit Instance
dockers: ['nvidia/tao/tao-toolkit-tf', 'nvidia/tao/tao-toolkit-pyt', 'nvidia/tao/tao-toolkit-lm']
format_version: 2.0
toolkit_version: 3.21.11
published_date: 11/08/2021

Verify:

import os

DATA_DIR = os.environ.get('LOCAL_DATA_DIR')
num_training_images = len(os.listdir(os.path.join(DATA_DIR, "training/image_2")))
num_training_labels = len(os.listdir(os.path.join(DATA_DIR, "training/label_2")))
num_testing_images = len(os.listdir(os.path.join(DATA_DIR, "testing/image_2")))
print("Number of images in the train/val set. {}".format(num_training_images))
print("Number of labels in the train/val set. {}".format(num_training_labels))
print("Number of images in the test set. {}".format(num_testing_images))


---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-11-d9f1ad23d866> in <module>
      3 
      4 DATA_DIR = os.environ.get('LOCAL_DATA_DIR')
----> 5 num_training_images = len(os.listdir(os.path.join(DATA_DIR, "training/image_2")))
      6 num_training_labels = len(os.listdir(os.path.join(DATA_DIR, "training/label_2")))
      7 num_testing_images = len(os.listdir(os.path.join(DATA_DIR, "testing/image_2")))

FileNotFoundError: [Errno 2] No such file or directory: '/home/vaaan/tlt-experiments/data/training/image_2'
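The FileNotFoundError above comes straight from os.listdir. A slightly more defensive variant of that counting cell (a sketch; `count_entries` is a hypothetical helper assuming the same KITTI-style layout) reports which directory is missing instead of raising:

```python
import os

def count_entries(base, rel):
    """Count files under base/rel, or name the missing directory instead of raising."""
    path = os.path.join(base, rel)
    if not os.path.isdir(path):
        return "missing: {}".format(path)
    return len(os.listdir(path))

# e.g. count_entries(os.environ.get("LOCAL_DATA_DIR", "."), "training/image_2")
```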

I am confused about how you are triggering the docker.
There are usually two ways to trigger the TAO docker.

  1. Using tao-launcher, i.e. , just run something like below.

$ tao yolov4

  2. Using docker run, similar to the command you shared.

$ docker run --runtime=nvidia -it -v /home/vaaan/tlt-experiments:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3 /bin/bash

But neither of these two ways will find your '/home/vaaan/tlt-experiments/data/training/image_2'. The path should be a path inside the docker.
For 1), according to your ~/.tlt_mounts.json,

        "source": "/home/vaaan/tlt-experiments",
        "destination": "/workspace/tlt-experiments"

the available path should be /workspace/tlt-experiments/data/training/image_2

For 2), according to "-v /home/vaaan/tlt-experiments:/workspace/tlt-experiments",
the available path should also be /workspace/tlt-experiments/data/training/image_2
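The translation described above can be sketched as a small helper (`to_container_path` is hypothetical, not part of the launcher): given the Mounts list, it rewrites a host path into its in-container equivalent.

```python
def to_container_path(host_path, mounts):
    """Rewrite a host path using the first mount whose 'source' is a prefix of it."""
    for m in mounts:
        src = m["source"].rstrip("/")
        if host_path == src or host_path.startswith(src + "/"):
            return m["destination"].rstrip("/") + host_path[len(src):]
    raise ValueError("No mount covers {}".format(host_path))

mounts = [{"source": "/home/vaaan/tlt-experiments",
           "destination": "/workspace/tlt-experiments"}]
print(to_container_path("/home/vaaan/tlt-experiments/data/training/image_2", mounts))
# -> /workspace/tlt-experiments/data/training/image_2
```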

I am saying that before this I could run the TLT toolkit with no issues. I changed CUDA for running DeepStream 6.0; after all that, now when I run TLT it can't mount.
Is this happening because of the CUDA dependency change?

When I run tao yolo_v4 I get this:

/home/vaaan/.local/lib/python3.7/site-packages/tlt/__init__.py:20: DeprecationWarning:
The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.

  warnings.warn(message, DeprecationWarning)
2021-12-21 10:06:18,443 [INFO] root: Registry: ['nvcr.io']
2021-12-21 10:06:18,516 [INFO] tlt.components.instance_handler.local_instance: No commands provided to the launcher
Kicking off an interactive docker session.
NOTE: This container instance will be terminated when you exit.
Traceback (most recent call last):
  File "/home/vaaan/.local/bin/tao", line 8, in <module>
    sys.exit(main())
  File "/home/vaaan/.local/lib/python3.7/site-packages/tlt/entrypoint/entrypoint.py", line 115, in main
    args[1:]
  File "/home/vaaan/.local/lib/python3.7/site-packages/tlt/components/instance_handler/local_instance.py", line 278, in launch_command
    docker_handler.run_container(command)
  File "/home/vaaan/.local/lib/python3.7/site-packages/tlt/components/docker_handler/docker_handler.py", line 284, in run_container
    mount_data, env_vars, docker_options = self._get_mount_env_data()
  File "/home/vaaan/.local/lib/python3.7/site-packages/tlt/components/docker_handler/docker_handler.py", line 112, in _get_mount_env_data
    raise ValueError("Mount point source path doesn't exist. {}".format(mount['source']))
ValueError: Mount point source path doesn't exist. /home/vaaan/YOUR_LOCAL_PROJECT_DIR_PATH

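Note that the ValueError prints the literal sample string YOUR_LOCAL_PROJECT_DIR_PATH, which suggests some copy of the mounts file (or a notebook variable that feeds it) was never edited from the sample's placeholder. A quick way to scan a mounts file for that (`find_placeholders` is a hypothetical helper, not part of the toolkit):

```python
import json
import os

PLACEHOLDER = "YOUR_LOCAL_PROJECT_DIR_PATH"

def find_placeholders(mounts_path):
    """Return mount entries whose 'source' still contains the sample placeholder."""
    with open(os.path.expanduser(mounts_path)) as f:
        cfg = json.load(f)
    return [m for m in cfg.get("Mounts", []) if PLACEHOLDER in m.get("source", "")]
```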

I made a few changes; now I get stuck here:

# If you use your own dataset, you will need to run the code below to generate the best anchor shape

!tlt yolo_v4 kmeans -l $DATA_DOWNLOAD_DIR/training/label_2 \
                     -i $DATA_DOWNLOAD_DIR/training/image_2 \
                     -n 9 \
                     -x 611 \
                     -y 1024

# The anchor shape generated by this script is sorted. Write the first 3 into small_anchor_shape in the config
# file. Write middle 3 into mid_anchor_shape. Write last 3 into big_anchor_shape.

/home/vaaan/.local/lib/python3.6/site-packages/tlt/__init__.py:20: DeprecationWarning:
The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.

  warnings.warn(message, DeprecationWarning)
2021-12-21 12:27:50,878 [INFO] root: Registry: ['nvcr.io']
2021-12-21 12:27:50,960 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import fnmatch, glob, traceback, errno, sys, atexit, locale, imp, stat
Traceback (most recent call last):
  File "/home/vaaan/.local/bin/tlt", line 8, in <module>
    sys.exit(main())
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/entrypoint/entrypoint.py", line 115, in main
    args[1:]
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/instance_handler/local_instance.py", line 319, in launch_command
    docker_handler.run_container(command)
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/docker_handler/docker_handler.py", line 284, in run_container
    mount_data, env_vars, docker_options = self._get_mount_env_data()
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/docker_handler/docker_handler.py", line 112, in _get_mount_env_data
    raise ValueError("Mount point source path doesn't exist. {}".format(mount['source']))
ValueError: Mount point source path doesn't exist. /home/vaaan/tlt-experiments/yolo_v4/YOUR_LOCAL_PROJECT_DIR_PATH
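Separately, the anchor grouping described in the kmeans cell's comment (first 3 into small_anchor_shape, middle 3 into mid_anchor_shape, last 3 into big_anchor_shape) can be sketched as (`split_anchors` is a hypothetical helper):

```python
def split_anchors(anchors):
    """Split the 9 sorted anchor shapes into small/mid/big groups of 3."""
    assert len(anchors) == 9, "yolo_v4 kmeans with -n 9 yields 9 anchors"
    return anchors[:3], anchors[3:6], anchors[6:]
```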

Note: I strongly feel it has something to do with docker, as I have the old Jupyter notebook log working on the same exact specifications.

Please try below.

pip3 uninstall nvidia-tlt
pip3 install nvidia-tao

See Migrating to TAO Toolkit — TAO Toolkit 3.21.11 documentation

More, please share your ~/.tao_mounts.json .

Still the same error.
Here is my !cat ~/.tlt_mounts.json:

{
    "Mounts": [
        {
            "source": "/home/vaaan/tlt-experiments",
            "destination": "/workspace/tlt-experiments"
        },
        {
            "source": "/home/vaaan/tlt_cv_samples_vv1.0.1/specs",
            "destination": "/workspace/tlt-experiments/yolo_v4/specs"
        }
    ]
}

Can you run below successfully?
$ docker run --runtime=nvidia -it --rm -v /home/vaaan/tlt-experiments:/workspace/tlt-experiments nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3 /bin/bash

Yes, I can.

Then, please run
# ls /workspace/tlt-experiments

It should contain the same files as your local folder "/home/vaaan/tlt-experiments".

Using this, I could not open the Jupyter notebook though.
When I entered
jupyter notebook --ip 0.0.0.0 --port 8888 --allow-root

This site can't be reached. The web page at http://127.0.0.1:8888/?token=143212a584b9315e73763db741a7adb866ff79e0a05a6694 might be temporarily down or it may have moved permanently to a new web address.
ERR_SOCKET_NOT_CONNECTED

Maybe the port is already in use.

When I try ls /workspace/tlt-experiments:

01_data data_object_label_2.zip test.py
TestingTLT data_pothole test.sh
Untitled.ipynb detectnet_v2 yolo
data ngccli yolo_v4
data_1 ngccli_linux.zip 'yolo_v4 _pothole'
data_object_image_2.zip old_data

So, there is no issue for the files mounting.

Maybe you already have a docker running that is using port 8888.
So, please run below to find the docker which is using port 8888.
$ docker ps

Then kill the docker
$ docker rm -fv <docker id>

Then, trigger the previous command again while adding -p 8888:8888
$ docker run --runtime=nvidia -it --rm -v /home/vaaan/tlt-experiments:/workspace/tlt-experiments -p 8888:8888 nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3 /bin/bash

Then, # jupyter notebook --ip 0.0.0.0 --allow-root
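Before killing a container, it may be worth confirming that something is actually bound to port 8888. A minimal standard-library sketch (`port_in_use` is a hypothetical helper, not a docker or jupyter command):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# e.g. port_in_use(8888)
```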

After killing the docker, my system is taking a long time on ngc config set.
After that:
docker login nvcr.io
Authenticating with existing credentials…
Login did not succeed, error: Error response from daemon: Get https://nvcr.io/v2/: Get https://nvcr.io/proxy_auth?account=%24oauthtoken&client_id=docker&offline_token=true: net/http: TLS handshake timeout
Username ($oauthtoken): $oauthtoken

Apparently I can't enter the password:
Password:
Error response from daemon: Get https://nvcr.io/v2/: Get https://nvcr.io/proxy_auth?account=%24oauthtoken&client_id=docker&offline_token=true: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (Client.Timeout exceeded while awaiting headers)

I had a slow internet connection; now I can open the Jupyter notebook as you asked.

OK, so you have no issue when running the 2nd method for triggering the TAO docker.

Yes, I can run the command without any problem. Let me check the dependency. Can you tell me how to install the NVIDIA runtime? The site was down last time I checked.

Since you already ran the 2nd way successfully, there is no issue with the NVIDIA runtime.
For the 1st way, please double-check TAO Toolkit Launcher — TAO Toolkit 3.21.11 documentation

Trying the CLI launcher method:
tao yolov4

I get this:

/home/vaaan/.local/lib/python3.6/site-packages/tlt/__init__.py:20: DeprecationWarning:
The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.

  warnings.warn(message, DeprecationWarning)
2021-12-22 12:43:09,093 [INFO] root: Registry: ['nvcr.io']
2021-12-22 12:43:09,171 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3
2021-12-22 12:43:09,171 [INFO] tlt.components.instance_handler.local_instance: No commands provided to the launcher
Kicking off an interactive docker session.
NOTE: This container instance will be terminated when you exit.
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import fnmatch, glob, traceback, errno, sys, atexit, locale, imp, stat
Traceback (most recent call last):
  File "/home/vaaan/.local/bin/tao", line 8, in <module>
    sys.exit(main())
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/entrypoint/entrypoint.py", line 115, in main
    args[1:]
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/instance_handler/local_instance.py", line 319, in launch_command
    docker_handler.run_container(command)
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/docker_handler/docker_handler.py", line 284, in run_container
    mount_data, env_vars, docker_options = self._get_mount_env_data()
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/docker_handler/docker_handler.py", line 112, in _get_mount_env_data
    raise ValueError("Mount point source path doesn't exist. {}".format(mount['source']))
ValueError: Mount point source path doesn't exist. /home/vaaan/YOUR_LOCAL_PROJECT_DIR_PATH

How about
$ tlt yolov4

tlt yolo_v4

/home/vaaan/.local/lib/python3.6/site-packages/tlt/__init__.py:20: DeprecationWarning:
The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.

  warnings.warn(message, DeprecationWarning)
2021-12-22 12:53:18,582 [INFO] root: Registry: ['nvcr.io']
2021-12-22 12:53:18,663 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3
2021-12-22 12:53:18,664 [INFO] tlt.components.instance_handler.local_instance: No commands provided to the launcher
Kicking off an interactive docker session.
NOTE: This container instance will be terminated when you exit.
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import fnmatch, glob, traceback, errno, sys, atexit, locale, imp, stat
Traceback (most recent call last):
  File "/home/vaaan/.local/bin/tlt", line 8, in <module>
    sys.exit(main())
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/entrypoint/entrypoint.py", line 115, in main
    args[1:]
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/instance_handler/local_instance.py", line 319, in launch_command
    docker_handler.run_container(command)
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/docker_handler/docker_handler.py", line 284, in run_container
    mount_data, env_vars, docker_options = self._get_mount_env_data()
  File "/home/vaaan/.local/lib/python3.6/site-packages/tlt/components/docker_handler/docker_handler.py", line 112, in _get_mount_env_data
    raise ValueError("Mount point source path doesn't exist. {}".format(mount['source']))
ValueError: Mount point source path doesn't exist. /home/vaaan/YOUR_LOCAL_PROJECT_DIR_PATH

How about
$ tao info --verbose
and
$ tlt info --verbose

tao info --verbose
/home/vaaan/.local/lib/python3.6/site-packages/tlt/__init__.py:20: DeprecationWarning:
The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.

warnings.warn(message, DeprecationWarning)
Configuration of the TAO Toolkit Instance

dockers:
nvidia/tao/tao-toolkit-tf:
v3.21.11-tf1.15.5-py3:
docker_registry: nvcr.io
tasks:
1. augment
2. bpnet
3. classification
4. dssd
5. emotionnet
6. efficientdet
7. fpenet
8. gazenet
9. gesturenet
10. heartratenet
11. lprnet
12. mask_rcnn
13. multitask_classification
14. retinanet
15. ssd
16. unet
17. yolo_v3
18. yolo_v4
19. yolo_v4_tiny
20. converter
v3.21.11-tf1.15.4-py3:
docker_registry: nvcr.io
tasks:
1. detectnet_v2
2. faster_rcnn
nvidia/tao/tao-toolkit-pyt:
v3.21.11-py3:
docker_registry: nvcr.io
tasks:
1. speech_to_text
2. speech_to_text_citrinet
3. text_classification
4. question_answering
5. token_classification
6. intent_slot_classification
7. punctuation_and_capitalization
8. spectro_gen
9. vocoder
10. action_recognition
nvidia/tao/tao-toolkit-lm:
v3.21.08-py3:
docker_registry: nvcr.io
tasks:
1. n_gram
format_version: 2.0
toolkit_version: 3.21.11
published_date: 11/08/2021

tlt info --verbose
/home/vaaan/.local/lib/python3.6/site-packages/tlt/__init__.py:20: DeprecationWarning:
The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.

warnings.warn(message, DeprecationWarning)
Configuration of the TAO Toolkit Instance

dockers:
nvidia/tao/tao-toolkit-tf:
v3.21.11-tf1.15.5-py3:
docker_registry: nvcr.io
tasks:
1. augment
2. bpnet
3. classification
4. dssd
5. emotionnet
6. efficientdet
7. fpenet
8. gazenet
9. gesturenet
10. heartratenet
11. lprnet
12. mask_rcnn
13. multitask_classification
14. retinanet
15. ssd
16. unet
17. yolo_v3
18. yolo_v4
19. yolo_v4_tiny
20. converter
v3.21.11-tf1.15.4-py3:
docker_registry: nvcr.io
tasks:
1. detectnet_v2
2. faster_rcnn
nvidia/tao/tao-toolkit-pyt:
v3.21.11-py3:
docker_registry: nvcr.io
tasks:
1. speech_to_text
2. speech_to_text_citrinet
3. text_classification
4. question_answering
5. token_classification
6. intent_slot_classification
7. punctuation_and_capitalization
8. spectro_gen
9. vocoder
10. action_recognition
nvidia/tao/tao-toolkit-lm:
v3.21.08-py3:
docker_registry: nvcr.io
tasks:
1. n_gram
format_version: 2.0
toolkit_version: 3.21.11
published_date: 11/08/2021