Hi,
I have a model that was trained with TAO Toolkit 3.0-22.05.
What I want to do is run this model in TLT v3.0.
I found a way to run a TLT v3.0 model in the TAO environment using 'tao-converter', but not the opposite.
Please let me know how to run a TAO model (specifically YOLOv3) in TLT v3.0.
Do you mean you are going to run inference in DeepStream 5.1 with a YOLOv3 model trained with 22.05 TAO?
You can configure your trained YOLOv3 .etlt model in DeepStream 5.1.
When I run the model in DeepStream 5.1 (which I am developing against), the error below appears.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:00.485865021 59 0x22aea30 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
In pgie_yolov3_tao_config.txt, I found on line 49:
“custom-lib-path=…/…/post_processor/libnvds_infercustomparser_tao.so”
But I'm using
“custom-lib-path=yolov3/libnvds_infercustomparser_tlt.so”.
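The fix being suggested is to point `custom-lib-path` at the newer `_tao` parser library instead of the old `_tlt` one. A minimal sketch of the relevant pgie config lines follows; the file names, model key placeholder, and relative paths are illustrative assumptions, not taken from the original thread, so adjust them to your own deployment:

```
[property]
# assumed model files; replace with your own artifacts and key
tlt-encoded-model=yolov3_resnet18.etlt
tlt-model-key=<your_model_key>
# parser library built from the post_processor directory of the
# deepstream_tao_apps sample repo, NOT the older
# libnvds_infercustomparser_tlt.so
custom-lib-path=post_processor/libnvds_infercustomparser_tao.so
```

If DeepStream cannot load or use this library, engine building fails with the "Failed to get cuda engine from custom library API" error shown above.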
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Please git clone the branch release/tao3.0 and then follow the steps to run again.
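For reference, a sketch of those steps, assuming the repository in question is NVIDIA's deepstream_tao_apps sample repo (the URL and build step are my assumption, not stated in the thread):

```shell
# Fetch the sample apps on the release/tao3.0 branch
git clone -b release/tao3.0 https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps/post_processor
# Rebuild the custom parser library referenced by custom-lib-path
make
```

After rebuilding, update `custom-lib-path` in the pgie config to point at the resulting `libnvds_infercustomparser_tao.so`.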