Tao classification train -e ./specs/classification_spec.cfg -r ./ -k error

Oh, why did you set so many source/destination pairs?

According to your folder structure, the following is enough:
"source": "/home/ncp/tao/cv_samples_v1.2.0/classification",
"destination": "/home/ncp/tao/cv_samples_v1.2.0/classification"

Then, when you log in to the docker container, you can find all the files under /home/ncp/tao/cv_samples_v1.2.0/classification.
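
For reference, a minimal ~/.tao_mounts.json with that single mapping would look roughly like this (just a sketch; only the Mounts list is required):

{
    "Mounts": [
        {
            "source": "/home/ncp/tao/cv_samples_v1.2.0/classification",
            "destination": "/home/ncp/tao/cv_samples_v1.2.0/classification"
        }
    ]
}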

My goodness, I did it exactly 1:1 according to the official documents. I really didn't understand why the official website gives three mappings here. Can you explain it roughly? I can train now, thank you for your support.

The official user guide is also fine. If there are three different folders, the end user can set three "source"/"destination" pairs. It simply shows end users examples of how to map local directories into the docker container.
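
For example, if your data, spec files, and results lived in three separate local folders, you could list three pairs in the Mounts section (the paths below are purely hypothetical):

{
    "Mounts": [
        { "source": "/home/user/tao/data",    "destination": "/workspace/tao-experiments/data" },
        { "source": "/home/user/tao/specs",   "destination": "/workspace/tao-experiments/specs" },
        { "source": "/home/user/tao/results", "destination": "/workspace/tao-experiments/results" }
    ]
}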

So the implication is that if I have three different project directories, I can follow the tao.json template given on the official website; otherwise, I just set up a local-to-docker mapping for the current directory.

When I export the model, it says that fp16 is not supported. How can I export fp16? I am using a GTX 1070 Ti.

If the GPU card does not support fp16, then TLT/TAO will not export an fp16 model.
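
For what it's worth, the GTX 1070 Ti is a Pascal card (compute capability 6.1) without fast fp16 support, so on that machine the export has to stay at fp32 (or int8 with a calibration file). A rough sketch of the fp32 export, assuming the usual classification export flags and placeholder paths (check `tao classification export --help` for your version):

# Paths and the model filename are placeholders; flag names may differ slightly by TAO version.
tao classification export \
    -m /home/ncp/tao/cv_samples_v1.2.0/classification/output/weights/resnet_080.tlt \
    -k $KEY \
    -o /home/ncp/tao/cv_samples_v1.2.0/classification/export/final_model.etlt \
    --data_type fp32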

Then, can the .etlt model I trained and exported on this server be converted to fp16 with the TAO tool on a Jetson NX or Nano?

Yes, on an NX or Nano you can download tao-converter (the Jetson version), copy the .etlt model onto the device, and then generate an fp16 TensorRT engine.
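
A rough sketch of what that conversion could look like on the Jetson (the input dimensions and output node name are assumptions based on the standard classification sample; adjust them to your model):

# -d is the model input in C,H,W; -o is the output blob name; -t fp16 requests an fp16 engine.
./tao-converter final_model.etlt \
    -k $KEY \
    -d 3,224,224 \
    -o predictions/Softmax \
    -t fp16 \
    -e final_model_fp16.engine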

Hi!
I don't know why, but when I retrained other models I got permission issues:

File "/usr/lib/python3.6/os.py", line 220, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: './weights'

Please create a new topic and share the full command and full log. Thanks.

ok