TAO Toolkit Export tool - Exporting DS5.1 models to DS6.0

I have a question regarding the attached image below.

If you want to bring over models that were developed for DeepStream 5.1, you'll need to create new calibration cache files using the TAO Toolkit Export tool. This is also true if you want to run models developed with TAO Toolkit 21.11 on DeepStream 5.1, as shown in the image below.

I have done the setup and I am able to run the Jupyter notebook.

How do I create new calibration cache files using the TAO Toolkit Export tool?

In the user guide, you can find an exporting section for each object detection network (and the other networks as well).
For example:
https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/yolo_v4.html#exporting-the-model
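For reference, here is a minimal sketch of the export invocation that writes a new calibration cache, based on the YOLOv4 export flags documented at the link above. All paths, the key, and the batch settings are placeholders to substitute for your setup:

# Export an INT8 .etlt and generate a fresh calibration cache (cal.bin).
# -m: input .tlt model, -k: encoding key, -e: training spec file,
# -o: output .etlt; --cal_image_dir supplies the calibration images.
tao yolo_v4 export -m /workspace/models/yolov4.tlt \
                   -k $KEY \
                   -e /workspace/specs/yolo_v4_train.txt \
                   -o /workspace/export/yolov4.etlt \
                   --data_type int8 \
                   --cal_image_dir /workspace/data/cal_images \
                   --batch_size 8 \
                   --batches 10 \
                   --cal_cache_file /workspace/export/cal.bin \
                   --cal_data_file /workspace/export/cal.tensorfile

The cal.bin written here is the new calibration cache that DeepStream 6.0 needs.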

In DeepStream 5.1, we have a .etlt model in production.
Can we use this .etlt to generate an .etlt for DeepStream 6.0?

For the export step described in https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/yolo_v4.html#exporting-the-model,
we would need the .tlt files.

The previous .etlt model can be deployed in DS6.0 directly.
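For completeness, here is a sketch of how the existing .etlt and a newly generated calibration cache would be wired into a Gst-nvinfer config under DS6.0. The property names come from the Gst-nvinfer documentation; the file name, paths, and key are placeholders:

# Append a minimal [property] fragment to the nvinfer config:
cat >> config_infer_primary.txt <<'EOF'
[property]
tlt-encoded-model=/opt/models/yolov4.etlt
tlt-model-key=<your-encoding-key>
int8-calib-file=/opt/models/cal.bin
network-mode=1
EOF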

Thanks @Morganh

Can we build the deepstream-tao-apps components (TRT-OSS and the post-processor) in the deepstream-6.0-samples docker container?

I am getting an error while running make:

root@c79ec6aed68e:/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_tao_apps# make
make -C post_processor
make[1]: Entering directory '/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_tao_apps/post_processor'
g++ -o libnvds_infercustomparser_tao.so nvdsinfer_custombboxparser_tao.cpp -I/opt/nvidia/deepstream/deepstream-6.0/sources/includes -I/usr/local/cuda-11.4/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -lnvparsers -L/usr/local/cuda-11.4/lib64 -lcudart -lcublas -Wl,--end-group
In file included from nvdsinfer_custombboxparser_tao.cpp:25:0:
/opt/nvidia/deepstream/deepstream-6.0/sources/includes/nvdsinfer_custom_impl.h:126:10: fatal error: NvCaffeParser.h: No such file or directory
 #include "NvCaffeParser.h"
          ^~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:49: recipe for target 'libnvds_infercustomparser_tao.so' failed
make[1]: *** [libnvds_infercustomparser_tao.so] Error 1
make[1]: Leaving directory '/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_tao_apps/post_processor'
Makefile:24: recipe for target 'all' failed
make: *** [all] Error 2
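NvCaffeParser.h is a TensorRT development header, so the failure suggests the samples image does not ship the TensorRT headers. One way to confirm, and (assuming an Ubuntu-based x86 container with NVIDIA's apt repositories configured) to install them for TensorRT 8.0, is sketched below:

# Check whether the header exists anywhere in the image:
find / -name NvCaffeParser.h 2>/dev/null
# If absent, installing the TensorRT development packages is one option:
apt-get update && apt-get install -y libnvinfer-dev libnvparsers-dev

Using the devel image instead (see below) avoids this step entirely.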

Should I try to set up everything natively, outside the container? (A few quick version checks for this stack are sketched below.)
TensorRT 8.0.1
CUDA 11.4
DeepStream 6.0
TRT-OSS and the post-processor for the TAO apps
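For reference, the versions above can be sanity-checked with commands like these (assuming a standard x86 Ubuntu install; adjust paths for Jetson):

dpkg -l | grep -E 'tensorrt|libnvinfer'   # expect 8.0.1.x
nvcc --version                            # expect CUDA 11.4
deepstream-app --version-all              # expect DeepStream 6.0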

I was able to build it in the DeepStream devel container.
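For anyone hitting the same build error, here is a sketch of the devel-container route. The image tag follows the NGC naming for DeepStream 6.0 and the repo URL is the public deepstream_tao_apps project; verify both before use:

docker pull nvcr.io/nvidia/deepstream:6.0-devel
docker run --gpus all -it --rm nvcr.io/nvidia/deepstream:6.0-devel
# Inside the container:
cd /opt/nvidia/deepstream/deepstream-6.0/sources
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps/post_processor
export CUDA_VER=11.4   # the Makefile reads CUDA_VER
make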

