Tlt-infer unrecognized arguments

Hello everyone, my name is Jose Luis. After failing to get good results with DIGITS, I opted to try TLT. The training apparently went well, but I ran into problems when running inference with “tlt-infer” on test images. The strange thing is that it doesn’t recognize the “model” input argument. I am running everything from the container nvcr.io/nvidia/tlt-streamanalytics:v2.0_dp_py2. I have looked in forums, GitHub, Medium, and the examples, but nowhere have I found someone with this problem, so I don’t know what may be happening. Here are the tests I carried out and the error message.

Test 1: -m


Test 2: --model

PS: the paths are correct; I already validated them. Also, the same model.step file worked with “tlt-evaluate detectnet_v2”.

For tlt-infer, there is no “-m” argument. Please see more details in the TLT user guide, or refer to the detectnet_v2 Jupyter notebook.
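For detectnet_v2, the model path is picked up from the inference spec file rather than from the command line, so the call looks roughly like this (the spec filename, directories, and key below are placeholders, not values from this thread):

tlt-infer detectnet_v2 -e /workspace/specs/inference_spec.txt \
                       -i /workspace/data/test_images \
                       -o /workspace/output \
                       -k $KEY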

Hmm, but what about this?

Image captured from https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#inference_detectnet_v2.

Am I misinterpreting the documentation?

Oh, there must be something wrong in the user guide. I will sync with the internal team about it.


On the other hand, for the record: indeed, there is no “-m” parameter in the Jupyter notebook either. I will assume that is the correct way.

Thanks @Morganh!!

Just a reminder: for detectnet_v2, there is no “-m”.
But for some other networks, there is a “-m”.

So, one tip is that you can run the command as below inside the docker container to see the help.

For example,

root@6c7a2a9e24cd:/workspace# tlt-infer detectnet_v2
Using TensorFlow backend.
usage: tlt-infer [-h] -e INFERENCE_SPEC -i INFERENCE_INPUT -k KEY -o
INFERENCE_OUTPUT [-v]
tlt-infer: error: argument -e/--inference_spec is required
root@6c7a2a9e24cd:/workspace# tlt-infer ssd
Using TensorFlow backend.
usage: tlt-infer [-h] -m MODEL -i IN_IMAGE_DIR -o OUT_IMAGE_DIR -k KEY -e
CONFIG_PATH [-l OUT_LABEL_DIR] [-t DRAW_CONF_THRES]
tlt-infer: error: argument -m/--model is required
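Putting the two together: ssd takes the model explicitly with -m, while detectnet_v2 reads the model path from the -e spec file. A sketch of the ssd call, matching the usage above (all paths and the key are placeholders, not from this thread):

tlt-infer ssd -m /workspace/models/ssd.tlt \
              -i /workspace/data/test_images \
              -o /workspace/output \
              -k $KEY \
              -e /workspace/specs/ssd_infer_spec.txt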
