How to run a TAO model in DeepStream 5.1

Hi,
I have a model which was trained with TAO Toolkit 3.0-22.05.
What I want to do is run this model in TLT v3.0.
I found the way to run a TLT v3.0 model in the TAO environment using 'tao-converter',
but not the opposite case.

Please let me know how to run a TAO model (especially YOLOv3) in TLT v3.0.

Do you mean you are going to run inference in DS5.1 with a YOLOv3 model which is trained with 22.05 TAO?
You can configure your trained YOLOv3 .etlt model in DeepStream 5.1.

You mean it is possible to run that model in DeepStream 5.1?
Is there any need to convert it?
Please explain the way in detail.

You can refer to the guide in GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tao3.0 to run inference with a YOLOv3 .etlt model.

Configure the model as in deepstream_tao_apps/pgie_yolov3_tao_config.txt at release/tao3.0 · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
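For context, the relevant entries in that config file look roughly like the following. The paths, the model key, and the class count here are illustrative placeholders, not the exact values from the repo; match them to your own model and build layout:

```
[property]
# Key used when the .etlt model was exported (adjust to yours)
tlt-model-key=nvidia_tlt
tlt-encoded-model=../../models/yolov3/yolov3_resnet18.etlt
labelfile-path=../../models/yolov3/yolov3_labels.txt
num-detected-classes=4
# Custom bounding-box parser for the TAO YOLOv3 output
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so
```

The `custom-lib-path` must point at the parser library built from the same release/tao3.0 repo, since the parse function name is resolved inside that library at engine-build time.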

When I run the model in DeepStream 5.1 (which I am developing against), the error below appeared.

ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.485865021 59 0x22aea30 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)

In pgie_yolov3_tao_config.txt, line 49 reads
“custom-lib-path=…/…/post_processor/libnvds_infercustomparser_tao.so”,
but I'm using
“custom-lib-path=yolov3/libnvds_infercustomparser_tlt.so”.

Is that the reason I got the error above?

Please change it and retry.
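Concretely, the fix is to point `custom-lib-path` at the TAO parser library built from the release/tao3.0 repo instead of the older TLT library. The exact relative path depends on where your config file sits relative to the built library; the path below is illustrative:

```
# before: TLT-era parser, mismatched with the tao3.0 apps
custom-lib-path=yolov3/libnvds_infercustomparser_tlt.so

# after: parser built from deepstream_tao_apps release/tao3.0
custom-lib-path=../../post_processor/libnvds_infercustomparser_tao.so
```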

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Please git clone the branch release/tao3.0 and then follow the steps to run again.
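A minimal sketch of those steps, assuming the repo URL from the guide above and that the custom parser is built from the post_processor directory (the CUDA version is an assumption; set it to match your platform):

```shell
# Clone only the release/tao3.0 branch of the reference apps
git clone -b release/tao3.0 https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps

# Build the custom parser library that custom-lib-path points at
export CUDA_VER=10.2   # adjust to your installed CUDA version
make -C post_processor

# Then run the sample app with the YOLOv3 config, per the repo README
```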

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.