And I also tried running it as root:
Error: no input dimensions given
root@ubuntu:/home/nvidia/Downloads/files# ./tao-converter /home/nvidia/Downloads/files/bodyposenet_deployable_v1.0.1/bpnet_model.deploy.etlt -k nvidia_tlt -p input_1:0,1x288x384x3,4x288x384x3,16x288x384x3 -t fp16 -m 16 -e trt.engine
Error: no input dimensions given
I will check on my orin and will give you update soon.
There is no issue when using tao-converter to generate a TensorRT engine on my side.
$ wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/bodyposenet/versions/deployable_v1.0.1/files/model.etlt' (according to BodyPoseNet | NVIDIA NGC)
$ wget --content-disposition 'https://api.ngc.nvidia.com/v2/resources/nvidia/tao/tao-converter/versions/v3.22.05_trt8.4_aarch64/files/tao-converter' (according to TAO Converter | NVIDIA NGC)
$ chmod +x tao-converter
$ ./tao-converter model.etlt -k nvidia_tlt -p input_1:0,1x288x384x3,4x288x384x3,16x288x384x3 -t fp16 -m 16 -e trt.engine
Your command works, thanks. I tried my bpnet_model.etlt and it created the engine. :>
Now that I have the trt.engine, how do I use it in DeepStream? (I am looking at this link, but it only talks about the pre-trained models: Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation)
And can I convert it to ONNX or something similar so that I can use the model with dusty-nv inference? Thx
Btw, in my screenshot there is an engine file already; does that mean I don't need to run tao-converter? Thx
The reason I need bpnet is that I want to use dusty-nv inference.
And we want to train the pose model ourselves for better results.
Btw, I have created a forum topic: How to use the trt engine in dusty-nv posenet?
The default 1.4.0 notebook should not contain the .engine files.
You can wget it again to double check.
And for running bpnet in deepstream, please follow GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
BTW, make sure you download the models according to GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tao3.0_ds6.1ga
And please use a command similar to the one below to run it. I can run it successfully on my Orin.
$ ./deepstream-bodypose2d-app 1 ../../../configs/bodypose2d_tao/sample_bodypose2d_model_config.txt 0 0 file:///home/nvidia/morgan/bpnet/deepstream_tao_apps_master/apps/tao_others/deepstream-bodypose2d-app/original-image.png ./body2dout
Yes, I saw the body2dout.jpg; it is the same as yours. Thx.
I am still trying to figure out how to use the trt.engine in DeepStream…
The link, GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream,
only shows how to run the samples; it does not say how to plug in my newly created trt.engine…
And how do I run it in real time? I have a C920 webcam.
Modify this line.
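For reference, a sketch of the kind of change involved, assuming the model config follows the standard nvinfer key/value format (these key names come from the nvinfer plugin documentation; I am not certain the bodypose2d sample config uses exactly these keys, and the paths are placeholders):

```ini
# Point the app at the newly generated engine instead of rebuilding from the etlt.
# model-engine-file is a standard nvinfer property; the path below is a placeholder.
model-engine-file=/home/nvidia/Downloads/files/trt.engine
# With a valid engine present, the etlt model and key are only used as a
# fallback if the engine cannot be deserialized on this device.
tlt-model-key=nvidia_tlt
# network-mode=2 selects FP16, matching the -t fp16 used with tao-converter.
network-mode=2
```

Note that a TensorRT engine is specific to the GPU, TensorRT version, and precision it was built with, so the engine must have been generated on the same Orin that runs the app.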
Thanks. it is using my engine now.
I only have two questions left
- How to run the deepstream using webcam and rtsp? Still looking for the tutorial link…
- How to convert the engine into onnx?
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
For deepstream_tao_apps, the current version does not support RTSP input yet. If you want to implement it yourself, you can refer to The LPD and LPR models in the TAO tool do not work well - #22 by Morganh for reference.
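For the webcam part, one workaround outside deepstream_tao_apps is the generic deepstream-app reference application, which can take a V4L2 camera as input through its config file. A minimal source group sketch (key names are from the deepstream-app reference config format; the device node and resolution are assumptions for a C920):

```ini
[source0]
enable=1
# type=1 selects a CameraV4L2 source in deepstream-app
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
# /dev/video0 -> dev-node 0; check with v4l2-ctl --list-devices
camera-v4l2-dev-node=0
```

The model section of the same config would then point at your engine, so this only covers the input side; wiring the bodypose2d postprocessing into deepstream-app is a separate step.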
No, you cannot convert the .etlt model to an ONNX file.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.