tlt-converter throws 'std::invalid_argument' error

I'm running tlt-converter on my Xavier NX for the 'tlt_gesturenet_deployable' example and I'm getting this error.

(tlt3) nx@nx-desktop:/opt/nvidia/deepstream/deepstream-5.1/sources/tlt_gesturenet_deployable_v1.0$ ./tlt-converter -k nvidia_tlt -d input_1,1x3x160x160,1x3x160x160,2x160x160 -t fp16 -e /model.plan /model.etlt
terminate called after throwing an instance of ‘std::invalid_argument’
what(): stoi
Aborted (core dumped)

Here are the usage arguments:

(tlt3) nx@nx-desktop:/opt/nvidia/deepstream/deepstream-5.1/sources/tlt_gesturenet_deployable_v1.0$ ./tlt-converter -h
usage: ./tlt-converter [-h] [-v] [-e ENGINE_FILE_PATH]
[-k ENCODE_KEY] [-c CACHE_FILE]
[-o OUTPUTS] [-d INPUT_DIMENSIONS]
[-b BATCH_SIZE] [-m MAX_BATCH_SIZE]
[-w MAX_WORKSPACE_SIZE] [-t DATA_TYPE]
[-i INPUT_ORDER]
input_file

Generate TensorRT engine from exported model

positional arguments:
input_file Input file (.etlt exported model).

required flag arguments:
-d comma separated list of input dimensions
-k model encoding key

optional flag arguments:
-b calibration batch size (default 8)
-c calibration cache file (default cal.bin)
-e file the engine is saved to (default saved.engine)
-i input dimension ordering – nchw, nhwc, nc (default nchw)
-m maximum TensorRT engine batch size (default 16). If meet with out-of-memory issue, please decrease the batch size accordingly
-o comma separated list of output node names (default none)
-t TensorRT data type – fp32, fp16, int8 (default fp32)
-w maximum workspace size of TensorRT engine (default 1<<30). If meet with out-of-memory issue, please increase the workspace size accordingly

Please refer to How to find the input/output layers names of tlt/etlt model - #8 by Morganh

It seems the tlt-converter from the download links is different from the tlt-converter used in the
"Using NVIDIA Pre-Trained Models and Transfer Learning Toolkit 3.0 to Create Gesture-Based Interactions with a Robot" webinar.

Here is the tlt-converter command from the webinar (transcribed from a screenshot):
./tlt-converter -k nvidia_tlt -p input_1,1x3x160x160,1x3x160x160,2x3x160x160 -t fp16
-e ~/projects/model.plan ~/projects/model.etlt

This tlt-converter uses the -p argument.
The tlt-converter from the download links does not have the -p argument.

Please download tlt-converter from Overview — TAO Toolkit 3.22.05 documentation

Thanks
I see they also have a link in the
"Using NVIDIA Pre-trained Models and Transfer Learning Toolkit 3.0 to Create Gesture-based Interactions with a Robot"
GitHub repo.

Righteous

I hate to keep bugging you, but when I run this I get a "no input dimensions given" error.

nx@nx-desktop:~/cuda10.2_trt7.1_jp4.5$ ./tlt-converter -k nvidia_tlt -p input_1,1x3x160x160,1x3x160x160,2x3x160x160 -t fp16 -e /model.plan /model.etlt
Error: no input dimensions given

The usage says input dimensions are not required when running TLT 3.0 models:

nx@nx-desktop:~/cuda10.2_trt7.1_jp4.5$ ./tlt-converter -h
usage: ./tlt-converter [-h] [-v] [-e ENGINE_FILE_PATH]
[-k ENCODE_KEY] [-c CACHE_FILE]
[-o OUTPUTS] [-d INPUT_DIMENSIONS]
[-b BATCH_SIZE] [-m MAX_BATCH_SIZE]
[-w MAX_WORKSPACE_SIZE] [-t DATA_TYPE]
[-i INPUT_ORDER] [-s] [-u DLA_CORE]
input_file

Generate TensorRT engine from exported model

positional arguments:
input_file Input file (.etlt exported model).

required flag arguments:
-d comma separated list of input dimensions(not required for TLT 3.0 new models).
-k model encoding key.

optional flag arguments:
-b calibration batch size (default 8).
-c calibration cache file (default cal.bin).
-e file the engine is saved to (default saved.engine).
-i input dimension ordering – nchw, nhwc, nc (default nchw).
-m maximum TensorRT engine batch size (default 16). If meet with out-of-memory issue, please decrease the batch size accordingly.
-o comma separated list of output node names (default none).
-p comma separated list of optimization profile shapes in the format <input_name>,<min_shape>,<opt_shape>,<max_shape>, where each shape has the format: xxx. Can be specified multiple times if there are multiple input tensors for the model. This argument is only useful in dynamic shape case.
-s TensorRT strict_type_constraints flag for INT8 mode(default false).
-t TensorRT data type – fp32, fp16, int8 (default fp32).
-u Use DLA core N for layers that support DLA(default = -1, which means no DLA core will be utilized for inference. Note that it’ll always allow GPU fallback).
-w maximum workspace size of TensorRT engine (default 1<<30). If meet with out-of-memory issue, please increase the workspace size accordingly.

Yes, in tlt_cv_compile.sh the "-d" argument does not need to be set for gesture.etlt.

  tlt-converter -k ${ENCODING_KEY} -t fp16 \
        -p input_1,1x3x160x160,1x3x160x160,2x3x160x160 \
        -e /models/triton_model_repository/hcgesture_tlt/1/model.plan \
        /models/tlt_cv_gesture_v${tlt_jarvis_ngc_version}/gesture.etlt

What’s your model.etlt, is it gesture.etlt?

Got it.
I was pointing at the wrong folder containing the .etlt file.

Thanks
