skim1
December 14, 2021, 11:03am
1
I have a question regarding the attached image below.
"If you want to bring models that were developed for DeepStream 5.1, you'll need to create new calibration cache files using the TAO Toolkit export tool. This is also true if you want to run models developed with TAO Toolkit 21.11 on DeepStream 5.1, as shown in the image below."
I have done the setup and I am able to run the Jupyter notebook.
How do I create new calibration cache files using the TAO Toolkit export tool?
Morganh
December 14, 2021, 5:32pm
2
In the user guide, you can find an exporting section for each object detection network, as well as for the other networks.
For example,
https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/yolo_v4.html#exporting-the-model
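For reference, a YOLOv4 export that also generates an INT8 calibration cache might look like the sketch below. All paths, the $KEY value, and the calibration image directory are placeholders; check the linked docs for the exact flags supported by your TAO Toolkit version.

```shell
# Sketch only: export a trained .tlt model to .etlt and produce an INT8
# calibration cache (cal.bin) for deployment. Run inside the TAO container.
tao yolo_v4 export \
    -m /workspace/experiments/yolov4_resnet18.tlt \
    -k $KEY \
    -e /workspace/specs/yolo_v4_train.txt \
    -o /workspace/export/yolov4_resnet18.etlt \
    --data_type int8 \
    --batch_size 8 \
    --batches 10 \
    --cal_image_dir /workspace/data/calibration_images \
    --cal_cache_file /workspace/export/cal.bin
```

The resulting cal.bin is the calibration cache the release note refers to; it is regenerated from your own calibration images, not converted from the old cache.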
skim1
December 15, 2021, 10:02am
3
In DeepStream 5.1, we have an .etlt model in production.
Can we use this .etlt to generate an .etlt for DeepStream 6.0?
For the export step in https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/yolo_v4.html#exporting-the-model
we would need the .tlt files.
Morganh
December 15, 2021, 10:41am
4
The previous .etlt model can be deployed directly in DS 6.0.
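For what it's worth, deploying an .etlt in DeepStream goes through the Gst-nvinfer config file; a minimal sketch, with placeholder paths and key (see the Gst-nvinfer plugin documentation for the full property list):

```ini
[property]
# Placeholder paths and key -- substitute your own model, cache, and key.
tlt-encoded-model=/models/yolov4_resnet18.etlt
tlt-model-key=<your-key>
int8-calib-file=/models/cal.bin
network-mode=1
```

network-mode=1 selects INT8; the int8-calib-file line is only needed for INT8 deployment.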
skim1
December 16, 2021, 3:37am
5
Thanks @Morganh
Can we build deepstream-tao-apps (TRT-OSS and the post-processor) in the deepstream-6.0 samples Docker container?
I am getting the error below when running make.
root@c79ec6aed68e:/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_tao_apps# make
make -C post_processor
make[1]: Entering directory '/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_tao_apps/post_processor'
g++ -o libnvds_infercustomparser_tao.so nvdsinfer_custombboxparser_tao.cpp -I/opt/nvidia/deepstream/deepstream-6.0/sources/includes -I/usr/local/cuda-11.4/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -lnvparsers -L/usr/local/cuda-11.4/lib64 -lcudart -lcublas -Wl,--end-group
In file included from nvdsinfer_custombboxparser_tao.cpp:25:0:
/opt/nvidia/deepstream/deepstream-6.0/sources/includes/nvdsinfer_custom_impl.h:126:10: fatal error: NvCaffeParser.h: No such file or directory
#include "NvCaffeParser.h"
^~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:49: recipe for target 'libnvds_infercustomparser_tao.so' failed
make[1]: *** [libnvds_infercustomparser_tao.so] Error 1
make[1]: Leaving directory '/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream_python_apps/deepstream_tao_apps/post_processor'
Makefile:24: recipe for target 'all' failed
make: *** [all] Error 2
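The error indicates that the TensorRT development headers (NvCaffeParser.h ships with them) are not present in the samples container. A quick sketch to check for them; the include paths below are typical Ubuntu defaults and may differ on your system:

```shell
# Look for a TensorRT header in the usual include directories.
# If it is missing, build inside the DeepStream devel image instead
# (or install the TensorRT development packages).
check_header() {
  for d in /usr/include /usr/include/x86_64-linux-gnu /usr/local/include; do
    if [ -e "$d/$1" ]; then
      echo "$1: found in $d"
      return 0
    fi
  done
  echo "$1: missing"
  return 1
}

check_header NvInfer.h || true
check_header NvCaffeParser.h \
  || echo "Use the DeepStream devel container or install the TensorRT dev packages."
```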
skim1
December 16, 2021, 5:57am
6
Should I try to set up everything natively, outside the container?
TensorRT 8.0.1
CUDA 11.4
DeepStream 6.0
TRT-OSS and the post-processor for the TAO apps
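Before going native, it may help to confirm the installed versions against the DS 6.0 requirements listed above; a rough sketch (default install paths assumed, each check degrades gracefully if the component is absent):

```shell
# Print toolchain versions to compare against the DeepStream 6.0
# requirements (TensorRT 8.0.1, CUDA 11.4).
command -v nvcc >/dev/null 2>&1 \
  && nvcc --version | grep release \
  || echo "nvcc not found (CUDA toolkit not on PATH)"
dpkg -l 2>/dev/null | grep -E 'libnvinfer|tensorrt' \
  || echo "no TensorRT packages found via dpkg"
cat /opt/nvidia/deepstream/deepstream/version 2>/dev/null \
  || echo "DeepStream version file not found"
```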
skim1
December 16, 2021, 7:44am
7
I was able to build it in the DeepStream devel container.
system
Closed
December 30, 2021, 7:45am
8
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.