Text Classification using Train Adapt Optimize (TAO) Toolkit

Hi

I’m running the DLI lab: Text Classification using Train Adapt Optimize (TAO) Toolkit, in the hosted environment. When I run the Jupyter notebook, it doesn’t download the spec files, so I can’t run the tutorial.

!tao text_classification download_specs \
    -r $RESULTS_DIR \
    -o $SPECS_DIR

The first time I run the command, it reports that it can’t find a local container and fetches one: nvcr.io/nvidia/tao/tao-toolkit-pyt:v3.21.11-py3. This container is then stored locally and used throughout the notebook.

When I run tao text_classification download_specs, the container stops before it has completed, which is why there is no spec file: 2022-09-22 16:21:07,966 [INFO] tlt.components.docker_handler.docker_handler: Stopping container. All subsequent cells in the notebook then fail to execute because the container has stopped.

Is there any way to fix this?

Updating nvidia-tao from the terminal worked. See: Tlt.components.docker_handler.docker_handler: Stopping container - #10 by Morganh

Yes, please install the latest nvidia-tao. Thanks.
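For anyone hitting the same issue, a minimal sketch of the fix, assuming the TAO launcher was installed via pip in the hosted environment (package name nvidia-tao; the exact version available may differ):

```shell
# Upgrade the TAO launcher to the latest release
# (use pip3/python3 -m pip depending on how the environment is set up)
pip3 install --upgrade nvidia-tao

# Confirm the launcher now runs and reports its configuration;
# the notebook cells can then be re-run from the download_specs step
tao info
```

After upgrading, restart the notebook kernel so the cells pick up the updated launcher, then re-run the download_specs cell.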