Tao: error: invalid choice: 'tlt-converter'

TensorRT Version: 7.2.1.6
Quadro RTX 5000 (dual GPU)
Driver Version: 455.23.05
CUDA Version: 11.1
Ubuntu 18.04
Python 3.6
Network: YOLOv4

nvidia/tao/tao-toolkit-tf:
docker_registry: nvcr.io
docker_tag: v3.21.08-py3

I am converting an .etlt model to a TensorRT engine, using the YOLOv4 sample notebook:

> # Convert to TensorRT engine (INT8).
> !tlt tlt-converter  -k $KEY  \
>                     -d 3,384,1248 \
>                     -o BatchedNMS \
>                     -c $USER_EXPERIMENT_DIR/export/cal.bin \
>                     -e $USER_EXPERIMENT_DIR/export/trt.engine \
>                     -b 8 \
>                     -m 16 \
>                     -t int8 \
>                     -i nchw \
>                     $USER_EXPERIMENT_DIR/export/yolov4_resnet18_epoch_070.etlt

/home/vaaan/.local/lib/python3.7/site-packages/tlt/__init__.py:20: DeprecationWarning:
The nvidia-tlt package will be deprecated soon. Going forward please migrate to using the nvidia-tao package.

warnings.warn(message, DeprecationWarning)
~/.tao_mounts.json wasn’t found. Falling back to obtain mount points and docker configs from ~/.tlt_mounts.json.
Please note that this will be deprecated going forward.
usage: tao [-h]
{list,stop,info,augment,bpnet,classification,converter,detectnet_v2,dssd,emotionnet,faster_rcnn,fpenet,gazenet,gesturenet,heartratenet,intent_slot_classification,lprnet,mask_rcnn,multitask_classification,n_gram,punctuation_and_capitalization,question_answering,retinanet,speech_to_text,speech_to_text_citrinet,ssd,text_classification,token_classification,unet,yolo_v3,yolo_v4}

tao: error: invalid choice: ‘tlt-converter’ (choose from ‘list’, ‘stop’, ‘info’, ‘augment’, ‘bpnet’, ‘classification’, ‘converter’, ‘detectnet_v2’, ‘dssd’, ‘emotionnet’, ‘faster_rcnn’, ‘fpenet’, ‘gazenet’, ‘gesturenet’, ‘heartratenet’, ‘intent_slot_classification’, ‘lprnet’, ‘mask_rcnn’, ‘multitask_classification’, ‘n_gram’, ‘punctuation_and_capitalization’, ‘question_answering’, ‘retinanet’, ‘speech_to_text’, ‘speech_to_text_citrinet’, ‘ssd’, ‘text_classification’, ‘token_classification’, ‘unet’, ‘yolo_v3’, ‘yolo_v4’)

I am getting this error. Can you help me solve it?

Please follow the latest user guide: Migrating to TAO Toolkit — TAO Toolkit 3.0 documentation
For the v3.21.08-py3 docker, the subcommand is ‘converter’.
‘tlt-converter’ is the old name for ‘converter’ inside the docker.
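For reference, here is a minimal sketch of the corrected call, keeping the same flags and paths as your original command and only swapping the subcommand:

!tao converter -k $KEY \
               -d 3,384,1248 \
               -o BatchedNMS \
               -c $USER_EXPERIMENT_DIR/export/cal.bin \
               -e $USER_EXPERIMENT_DIR/export/trt.engine \
               -b 8 \
               -m 16 \
               -t int8 \
               -i nchw \
               $USER_EXPERIMENT_DIR/export/yolov4_resnet18_epoch_070.etlt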

Can we run inference with the .etlt model directly on my PC using DeepStream?

Yes, you can deploy the .etlt model in DeepStream and run inference.
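As a rough sketch only (file names, the label file, and the custom parser library path are assumptions based on the TAO DeepStream sample apps, not taken from this thread), the nvinfer configuration for an .etlt YOLOv4 model typically carries properties along these lines:

[property]
# Encoded TAO/TLT model plus the key used when it was exported
tlt-encoded-model=yolov4_resnet18_epoch_070.etlt
tlt-model-key=<your $KEY>
# INT8 calibration cache from export; network-mode=1 selects INT8
int8-calib-file=cal.bin
network-mode=1
labelfile-path=labels.txt
batch-size=8
num-detected-classes=<classes in your training spec>
output-blob-names=BatchedNMS
# Custom bbox parser shipped with the deepstream_tao_apps repo (path is an assumption)
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tao.so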

Do you have any documentation on how to run inference with YOLOv4 in DeepStream?

https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/yolo_v4.html#deploying-to-deepstream
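Once an nvinfer config like the sketch above is referenced from a pipeline configuration, inference can be launched with the standard DeepStream reference app (the config file name here is only a placeholder):

deepstream-app -c deepstream_app_source1_yolov4.txt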
