The input device is not a TTY

Hello,

Please, I got this error when trying to run this command:
os.system("/home/sylia/.local/bin/tlt detectnet_v2 dataset_convert -d configData -o kittiTrain")

2021-04-07 10:44:55,411 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
the input device is not a TTY
2021-04-07 10:44:55,921 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
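
For what it is worth, the "not a TTY" message typically appears when docker is asked for an interactive terminal but the calling process has no terminal attached, for example when the command is launched through os.system() from a script or notebook rather than from a terminal. If you do need to drive the launcher from Python, a sketch like the one below at least captures the full output for diagnosis (it does not by itself attach a TTY, so trying the command directly from a terminal is the simpler test; the path and arguments are just the ones from the command above):

import subprocess

# Sketch only: run the tlt launcher from Python and capture its full output
# for diagnosis, instead of discarding it as os.system() does.
result = subprocess.run(
    ["/home/sylia/.local/bin/tlt", "detectnet_v2", "dataset_convert",
     "-d", "configData", "-o", "kittiTrain"],
    capture_output=True,
    text=True,
)
print(result.stdout)
print(result.stderr)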

To narrow this down, can you run tlt detectnet_v2 dataset_convert -d configData -o kittiTrain successfully?

When I run this command: "tlt detectnet_v2 dataset_convert -d /configData/configData.json -o kittiTrain"

it can't find my configuration file!

Matplotlib created a temporary config/cache directory at /tmp/matplotlib-x1yfd27m because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
Traceback (most recent call last):
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/dataset_convert.py", line 90, in
File "/home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/detectnet_v2/scripts/dataset_convert.py", line 82, in main
FileNotFoundError: [Errno 2] No such file or directory: '/configData/configData.json'

Please note that /configData/configData.json should be available inside the docker container.
You can log in to the docker container to debug.

Reference: Tlt 3.0 - #2 by Morganh

I followed the instructions mentioned in this link: TLT Launcher — Transfer Learning Toolkit 3.0 documentation
then I retrieved the docker image: docker pull nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3
I mapped my local folder with tlt_mounts.json
then launched the container: docker run --runtime=nvidia -it -v antt:/workspace/tlt_experiments nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3 /bin/bash

Is that what I should do, please?

tlt_mounts.json
{
    "Mounts": [
        {
            "source": "/home/sylia/antt",
            "destination": "/workspace/tlt-experiments"
        },
        {
            "source": "/home/sylia/antt/Data/Work/resnet18/config",
            "destination": "/workspace/tlt-experiments/specs"
        }
    ],
    "DockerOptions": {
        "shm_size": "16G",
        "ulimits": {
            "memlock": -1,
            "stack": 67108864
        },
        "user": "1000:1000"
    }
}
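
As a side note, a quick way to sanity-check that this file is valid JSON and that the source folders actually exist is a small script like the one below (a sketch only; it assumes the file lives at ~/.tlt_mounts.json, which is where the launcher warning above says it is read from):

import json
import os

# Sketch: verify ~/.tlt_mounts.json parses and that each source path exists on the host.
mounts_file = os.path.expanduser("~/.tlt_mounts.json")
with open(mounts_file) as f:
    config = json.load(f)  # raises an error here if the JSON is malformed

for mount in config.get("Mounts", []):
    src = mount["source"]
    print(src, "->", mount["destination"],
          "(exists)" if os.path.exists(src) else "(MISSING on host)")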

As I mentioned, the easiest way for you to debug is to log in to the docker container.
$ docker run --runtime=nvidia -it nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3 /bin/bash
Then
# ls /workspace/tlt-experiments

The content should be the content of /home/sylia/antt,
because you map your local /home/sylia/antt to /workspace/tlt-experiments.

I am afraid your -d /configData/configData.json is not correct.

Yes, my directory antt/ is in /workspace/tlt_experiments/
(screenshot)

but when I type tlt -h at "root@650211afabc3:/workspace/tlt_experiments#" I get
tlt: command not found
even though the command exists in bash.
I do not understand what you meant about debugging with docker, please?

When you log in to the docker container, where is configData.json?
Is it /workspace/tlt_experiments/configData/configData.json?

(screenshot)

So, you need to modify your command to

tlt detectnet_v2 dataset_convert -d /workspace/tlt-experiments/configData.json -o kittiTrain
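
For context, the container-side path matters because the launcher's mounts translate a path under the mapped source folder on the host into the corresponding destination inside the container. A minimal sketch of that translation, assuming the Mounts entries from the tlt_mounts.json above (the example host file location is hypothetical):

# Sketch: translate a host path to the path seen inside the container,
# based on the "Mounts" entries from tlt_mounts.json above.
mounts = [
    {"source": "/home/sylia/antt", "destination": "/workspace/tlt-experiments"},
    {"source": "/home/sylia/antt/Data/Work/resnet18/config",
     "destination": "/workspace/tlt-experiments/specs"},
]

def to_container_path(host_path):
    # Use the longest matching source prefix so nested mounts win.
    for m in sorted(mounts, key=lambda m: len(m["source"]), reverse=True):
        if host_path.startswith(m["source"]):
            return m["destination"] + host_path[len(m["source"]):]
    return None  # not under any mounted folder, so not visible in the container

# Hypothetical host location of the spec file, for illustration only:
print(to_container_path("/home/sylia/antt/configData.json"))
# -> /workspace/tlt-experiments/configData.json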

It returns that the tlt command is not found?!

Hey,
The tlt command should run on the host PC instead of inside the docker container.
See Migrating to TLT 3.0 — Transfer Learning Toolkit 3.0 documentation

Thank you for your answers.
One last question, please: why, when we shut down the PC, do we lose the docker image and have to redo all the steps (docker pull image …)?
Is there another way to keep the image?

thank you

No, if you install the tlt launcher correctly, the tlt command should be available even after you reboot your PC.
If not, you can try to activate your virtual environment:
$ source your_venv_folder/bin/activate
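
If it helps, a quick way to check from Python whether the launcher is visible on the PATH after activating the environment (a sketch only; the expected location is just the one mentioned earlier in this thread):

import shutil

# Sketch: report where the tlt launcher is found on the PATH, if anywhere.
tlt_path = shutil.which("tlt")
print(tlt_path or "tlt not found on PATH")
# Earlier in this thread it lived at /home/sylia/.local/bin/tlt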

And the docker image should be available too.

See TLT Launcher — Transfer Learning Toolkit 3.0 documentation

No, tlt is still present; it is just the docker image that I can't find after restarting.

Can you run below command successfully?
$ tlt detectnet_v2 train --help

As mentioned in TLT Launcher — Transfer Learning Toolkit 3.0 documentation ,

When the user executes a command, for example tlt detectnet_v2 train --help, the TLT launcher does the following:

  1. Pulls the required TLT container with the entrypoint for DetectNet_v2
  2. Creates an instance of the container
  3. Runs the detectnet_v2 entrypoint with the train sub-task
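
Very roughly, those three steps correspond to something like the sketch below using the Docker SDK for Python (an illustration of the pull / create / run flow only, not the launcher's actual implementation; the image name and in-container command are taken from earlier in this thread):

import docker  # the Docker SDK for Python (pip install docker)

client = docker.from_env()

# 1. Pull the required container image.
client.images.pull("nvcr.io/nvidia/tlt-streamanalytics", tag="v3.0-dp-py3")

# 2. and 3. Create a container instance and run the detectnet_v2 entrypoint
# with the "train" sub-task, then print the captured output.
output = client.containers.run(
    "nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3",
    command="detectnet_v2 train --help",
    runtime="nvidia",
    remove=True,
)
print(output.decode())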

Yes, the command launches.

OK, if you have ever done docker pull for the image, the image should be available even after you reboot the PC. You can check which images are present with the docker images command.