Hello,
I want to run the PeopleNet model on an image in a Python environment and detect bags, as mentioned in this link: https://forums.developer.nvidia.com/t/peoplenet-not-detecting-bags/230016. I have downloaded the unpruned PeopleNet v2.0 (ResNet-34) model, and I now want to convert it to a TensorRT engine and run inference on it. Please show me how to do this, either with Docker or without Docker.
Thanks Morganh, but I have the .tlt model, and through my search I have found that the model should first be converted to an .etlt one. If so, please let me know how to do it.
Before TAO 5.0, the model is exported to the .etlt format. You can configure it as tlt-encoded-model=xxx.etlt.
Since TAO 5.0, the model can be exported to the .onnx format. You can add one line and configure it as onnx-file=xxx.onnx.
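For context, here is a minimal sketch of how those keys would appear in a DeepStream nvinfer configuration (the file names and key value below are illustrative assumptions, not from this thread):
[property]
# Pre-TAO-5.0 model exported as .etlt:
tlt-encoded-model=resnet34_peoplenet.etlt
tlt-model-key=tlt_encode
# TAO 5.0+ model exported as .onnx (use instead of the two lines above):
onnx-file=resnet34_peoplenet.onnx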
This is my convert script in nvidia-tao 4.0:
tao detectnet_v2 export \
  -m $tlt_path \
  -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti.txt \
  -o $etlt_path \
  -k $KEY \
  --gen_ds_config
Please let me know how I should change it.
Since you are using TAO 4.0, you can follow the 4.0 user guide, DetectNet_v2 - NVIDIA Docs, to export to an .etlt file.
I have followed the instructions, but it resulted in this error:
Loading uff directly from the package source code
usage: detectnet_v2 [-h] [--gpu_index GPU_INDEX] [--log_file LOG_FILE] {evaluate,gen_trt_engine,inference} ...
detectnet_v2: error: invalid choice: 'export' (choose from 'evaluate', 'gen_trt_engine', 'inference')
I have installed nvidia-tao with
pip install nvidia-tao==4.0.0
I use Python 3.8
and have
nvidia-pyindex 1.0.9
nvidia-tao 4.0.1
nvidia-tao-deploy 4.0.0.1
nvidia-tensorrt 8.4.1.5
(Because I want to convert to an engine and run inference with tao-deploy, I have used TAO version 4.0.)
How can I fix this?
From the above log, I think you are running with tao-deploy detectnet_v2: the {evaluate, gen_trt_engine, inference} choices in the usage message come from the nvidia-tao-deploy wheel, which does not include an export subcommand.
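One way to confirm which wheel owns the detectnet_v2 console script (a sketch, assuming the packages were pip-installed into the active environment):
# The script on PATH comes from the nvidia-tao-deploy wheel, which only
# provides evaluate, gen_trt_engine, and inference -- no export.
which detectnet_v2
pip show nvidia-tao nvidia-tao-deploy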
Please follow the above-mentioned 4.0 user guide, DetectNet_v2 - NVIDIA Docs, to generate the .etlt file. Or you can use an alternative way:
docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:4.0.1-tf1.15.5 /bin/bash
Then run detectnet_v2 export xxx inside the docker in order to generate the .etlt file.
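For example, a sketch of that flow (the mount point, file names, and key below are placeholders based on the PeopleNet model card, not confirmed in this thread):
# Mount the directory holding the .tlt model and the spec file:
docker run --runtime=nvidia -it --rm \
  -v /local/experiments:/workspace/experiments \
  nvcr.io/nvidia/tao/tao-toolkit:4.0.1-tf1.15.5 /bin/bash
# Inside the container (note there is no leading "tao" prefix):
detectnet_v2 export \
  -m /workspace/experiments/resnet34_peoplenet.tlt \
  -e /workspace/experiments/detectnet_v2_retrain_resnet18_kitti.txt \
  -o /workspace/experiments/resnet34_peoplenet.etlt \
  -k tlt_encode \
  --gen_ds_config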
Then continue to use tao-deploy to generate the engine and run inference as you expect.
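A sketch of that tao-deploy step (paths are placeholders, and the exact flags may differ slightly between releases, so verify with detectnet_v2 gen_trt_engine --help and detectnet_v2 inference --help):
# Back on the host, with the nvidia-tao-deploy wheel installed:
detectnet_v2 gen_trt_engine \
  -m resnet34_peoplenet.etlt \
  -k tlt_encode \
  -e detectnet_v2_retrain_resnet18_kitti.txt \
  --data_type fp16 \
  --engine_file resnet34_peoplenet.engine
detectnet_v2 inference \
  -e detectnet_v2_retrain_resnet18_kitti.txt \
  -m resnet34_peoplenet.engine \
  -i /path/to/images \
  -r /path/to/results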
For this model (PeopleNet ResNet-34 v2.0), when exporting and then running inference, which .txt file is suitable for the -e parameter?
detectnet_v2 export \
  -m $tlt_path \
  -e detectnet_v2_retrain_resnet18_kitti.txt \
  -o $etlt_path \
  -k "tlt_encode" \
  --gen_ds_config
It is the spec file you used when running training or retraining.
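For reference, a detectnet_v2 spec is a protobuf text file; a minimal skeleton looks like this (section names are from the TAO DetectNet_v2 docs, and the values here are illustrative only):
model_config {
  arch: "resnet"
  num_layers: 34  # PeopleNet v2.0 uses a ResNet-34 backbone
}
# A full training/retraining spec also contains: random_seed, dataset_config,
# augmentation_config, postprocessing_config, cost_function_config,
# training_config, evaluation_config, and bbox_rasterizer_config.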
I want to run the model in inference mode; however, this file,
detectnet_v2_retrain_resnet18_kitti.txt,
leads to correct results.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
The results are correct, right?