I have a cluster with containers that can run generic ML jobs. They use lambda-stack base images, but the cluster includes the nvidia containers.
How can I run TAO within these containers? The tao launcher tries to start a new docker container from within the container, which doesn’t seem like a good pattern.
It is possible to run the TAO container within another container by mounting the host docker socket:
$ docker run --runtime=nvidia -it --rm -v /var/run/docker.sock:/var/run/docker.sock nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3 /bin/bash
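To clarify why this works: mounting /var/run/docker.sock gives the docker CLI inside the container access to the host's docker daemon, so any container the tao launcher starts is a sibling on the host, not a nested container ("docker-out-of-docker"). A quick way to see this, assuming the docker CLI is present inside the container:

```shell
# Inside the container with the socket mounted, this talks to the
# HOST daemon, so it lists the host's containers (including this one).
docker ps
```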
Is there any way to run without using docker within another docker container?
Please note that TAO does not start a docker container nested within the container.
For example, after running "$ tao ssd",
the task runs inside a TAO container started on the host via the mounted docker socket, not inside a container within another container.