No Such Container (Docker Container) in TAO Example Code Run

I tried to run the example notebook “facenet.ipynb” from Nvidia, but I get the error “docker.errors.NotFound: 404 Client Error: Not Found No such container”.

What steps might I have missed?

Code block in the example code:

!tao detectnet_v2 dataset_convert \
                  -d $LOCAL_SPECS_DIR/facenet_tfrecords_kitti_train.txt \
                  -o $LOCAL_DATA_DIR/tfrecords/training/kitti_train

Error Message:

/usr/lib/python3/dist-packages/requests/ RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn’t match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn’t match a supported "
2022-12-22 16:29:23,314 [INFO] root: Registry: [‘’]
2022-12-22 16:29:23,369 [INFO] tlt.components.instance_handler.local_instance: Running command in container:
Error response from daemon: Container 2f0469e240ac727af4e635920883dfc60c0f472b5f8621f895e20f90546b446e is not running
2022-12-22 16:29:24,229 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
Traceback (most recent call last):
File “/home/isi/.local/lib/python3.8/site-packages/docker/api/”, line 259, in _raise_for_status
File “/usr/lib/python3/dist-packages/requests/”, line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http+docker://localhost/v1.41/containers/2f0469e240ac727af4e635920883dfc60c0f472b5f8621f895e20f90546b446e/stop

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File “/home/isi/.local/bin/tao”, line 8, in
File “/home/isi/.local/lib/python3.8/site-packages/tlt/entrypoint/”, line 114, in main
File “/home/isi/.local/lib/python3.8/site-packages/tlt/components/instance_handler/”, line 319, in launch_command
File “/home/isi/.local/lib/python3.8/site-packages/tlt/components/docker_handler/”, line 315, in run_container
File “/home/isi/.local/lib/python3.8/site-packages/tlt/components/docker_handler/”, line 322, in stop_container
File “/home/isi/.local/lib/python3.8/site-packages/docker/models/”, line 436, in stop
return self.client.api.stop(, **kwargs)
File “/home/isi/.local/lib/python3.8/site-packages/docker/utils/”, line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File “/home/isi/.local/lib/python3.8/site-packages/docker/api/”, line 1167, in stop
File “/home/isi/.local/lib/python3.8/site-packages/docker/api/”, line 261, in _raise_for_status
raise create_api_error_from_http_exception(e)
File “/home/isi/.local/lib/python3.8/site-packages/docker/”, line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.NotFound: 404 Client Error: Not Found (“No such container: 2f0469e240ac727af4e635920883dfc60c0f472b5f8621f895e20f90546b446e”)

End of error message

Additionally, two more considerations:
     I logged in to “” using $ docker login
     $ tao detectnet_v2 run /bin/bash also gives the same error as above
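As a generic triage step (plain Docker commands, nothing TAO-specific; not from the notebook), I also checked whether the daemon is reachable and whether the TAO container had exited early:

```shell
# Generic Docker triage: is the daemon reachable, and did the TAO
# container exit before the launcher tried to use it?
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  DOCKER_OK=1
  # List recently exited containers; the container ID from the error
  # message should appear here if it started and then died.
  docker ps -a --filter "status=exited" --format "{{.ID}} {{.Image}} {{.Status}}"
else
  DOCKER_OK=0
  echo "docker CLI missing or daemon unreachable"
fi
```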

Runtime Environment:
     Hardware: AGX ORIN
     Network Type: Detectnet_v2
     tao version: format 2.0, toolkit 4.0.0

TAO Toolkit is supported on discrete GPUs, such as A100, A40, A30, A2, A16, A100x, A30x, V100, T4, Titan-RTX and Quadro-RTX. There is no corresponding Docker image for Jetson.

@yingliu Thanks for the information.

I actually didn’t expect the Nvidia TAO Toolkit not to support Nvidia Jetson devices.

I have three additional questions:

     First, page 15 of the official AGX Orin manual and the facenet page mention a TAO Toolkit implementation on Jetson hardware. Are there other neural network models (in the TAO Toolkit) that I could use on the Orin?

     Second, doesn’t a Docker image depend on the kernel type rather than the hardware (I thought Docker was hardware independent)? I see the Orin has
     Operating System: Ubuntu 20.04.5 LTS
     Kernel: Linux 5.10.65-tegra
     Architecture: arm64

     Lastly, if I want to run “detectnet_v2” (especially facenet) without a Docker image on the Orin, are there proper steps I could follow?
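Regarding the second question, here is a quick check of the kernel-versus-architecture point (standard tooling, nothing TAO-specific):

```shell
# Docker shares the host *kernel*, but images are still built for a CPU
# architecture: an x86_64 (amd64) image cannot run natively on arm64.
ARCH=$(uname -m)   # aarch64 on a Jetson Orin, x86_64 on a typical PC
echo "host architecture: $ARCH"
```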

TAO is designed for training on platforms with an x86-based CPU and discrete GPUs.

After training, users can deploy the model to run inference on discrete GPUs or any Jetson device (for example, Orin, Xavier, etc.).

So, if you run training, it is not supported on Jetson devices. If you run inference, it is supported on both dGPU devices and Jetson devices.

@Morganh Thanks for the detailed information.

If the TAO Toolkit is mainly for training, how do I deploy (convert the trained weights and neural network) to a Jetson device? Please share a guideline or document.

     Is the step: training → tao detectnet_v2 export → tao-converter?
         Reference code
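For reference, the export step I am asking about would look roughly like this (paths and the -k key are placeholders; flags as I understand them from the DetectNet_v2 docs):

```
!tao detectnet_v2 export \
                  -m $USER_EXPERIMENT_DIR/weights/model.tlt \
                  -o $USER_EXPERIMENT_DIR/model.etlt \
                  -k $KEY
```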

For running inference with TAO models in any dgpu devices or Jetson devices, there are several ways.

  1. If you want to run with DeepStream, please follow the DetectNet_v2 — TAO Toolkit 4.0 documentation or refer to GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream directly. Users can configure the .etlt model in the config file. Actually, users can also configure the TensorRT engine in the config file.
  2. Refer to Integrating TAO CV Models with Triton Inference Server — TAO Toolkit 4.0 documentation or GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton
  3. Users can use tao-converter to generate a TensorRT engine from the .etlt model, then write standalone code to consume the TensorRT engine.
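A rough sketch of option 3 on the device side (the input dimensions, output node names, and key below are placeholders — take the exact values from the FaceNet model card):

```
# Run on the Jetson; the tao-converter build must match the device's
# TensorRT version.
tao-converter model.etlt \
              -k $KEY \
              -d 3,416,736 \
              -o output_cov/Sigmoid,output_bbox/BiasAdd \
              -e detectnet_v2_facenet.engine
```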

@Morganh Thanks for the detailed information.

I will try the third one first because it looks like the shortest path to implement.

Please close my question. (I couldn’t close it myself.)

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.