• Hardware Platform (Jetson / GPU) - GeForce RTX 3090
• DeepStream Version - 5.1
• TensorRT Version - 7.2.2 (libnvinfer_plugin.so built from github 21.02 tag)
• NVIDIA GPU Driver Version (valid for GPU only) - 460.32.03
• Issue Type( questions, new requirements, bugs) - Bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
I am experiencing a number of issues running the TLT pre-trained models. I have attached a .tar.gz file that can be used to reproduce the issues. The .tar.gz file contains:
- docker-compose.yaml: defines the Docker container built from the Dockerfile and mounts in the ./models and ./configs directories
- Dockerfile: builds the environment from nvcr.io/nvidia/deepstream:5.1-21.02-devel. It first builds CMake v3.13.5, then builds libnvinfer_plugin.so from the 21.02 tag of the nvidia/TensorRT GitHub repo, and finally builds libnvds_infercustomparser_tlt.so from the release/tlt3.0 branch of the NVIDIA-AI-IOT/deepstream_tlt_apps GitHub repo.
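For context, a minimal sketch of what the Dockerfile does (versions and branch/tag names are the ones stated above; the exact clone URLs, build flags, and CUDA_VER are assumptions and may differ from my actual file):

```
# Sketch only - mirrors the build steps described above, not the exact Dockerfile
FROM nvcr.io/nvidia/deepstream:5.1-21.02-devel

# Build CMake 3.13.5 (required by the TensorRT OSS build)
RUN wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz && \
    tar xf cmake-3.13.5.tar.gz && cd cmake-3.13.5 && \
    ./bootstrap && make -j$(nproc) && make install

# Build libnvinfer_plugin.so from the TensorRT OSS 21.02 tag
RUN git clone -b 21.02 https://github.com/NVIDIA/TensorRT.git && \
    cd TensorRT && git submodule update --init --recursive && \
    mkdir build && cd build && \
    cmake .. -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu -DTRT_OUT_DIR=$(pwd)/out && \
    make -j$(nproc) nvinfer_plugin

# Build libnvds_infercustomparser_tlt.so from deepstream_tlt_apps
RUN git clone -b release/tlt3.0 https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps.git && \
    cd deepstream_tlt_apps/post_processor && \
    CUDA_VER=11.1 make
```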
- configs directory: contains config files modified from the “tlt_pretrained_models” configs included with DeepStream. Incorrect paths have been fixed, the config_infer_primary_*.txt files have been modified to use libnvds_infercustomparser_tlt.so, and a deepstream_app_source1_detection_*.txt file has been added for each model, based on deepstream_app_source1_detection_models.txt.
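The change made to each config_infer_primary_*.txt is along these lines (the function name shown is the NMS parser used by the SSD-family detectors; other models use their own parser function from the same library, and the library path shown is an assumption based on where my Dockerfile places the build output):

```
[property]
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_infercustomparser_tlt.so
```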
Ensure nvidia-container-runtime is installed on the host machine (running Ubuntu 18.04) and configured as the default Docker runtime. Extract the attached archive. Then download the TLT models from https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip (linked from the “Transfer Learning Toolkit (TLT) Integration with DeepStream — DeepStream 5.1 Release documentation” page) and extract them into the models directory.
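The download/extract step, sketched as shell commands (the URL is the one above; extracting directly into ./models is an assumption matching the docker-compose mount, so adjust if your archive has a different internal layout):

```shell
# Download and extract the TLT pre-trained models into ./models
mkdir -p models
wget -O tlt_models.zip \
  "https://nvidia.box.com/shared/static/i1cer4s3ox4v8svbfkuj5js8yqm3yazo.zip"
unzip -o tlt_models.zip -d models
rm tlt_models.zip
```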
Run the “./start.sh” script to build and run the container, then run each of the deepstream_app_source1_*.txt config files with “deepstream-app -c filename.txt”.
Every model/config has issues; below I list the issues I observe with each config:
deepstream_app_source1_detection_dssd.txt: runs, but misses a lot of detections and the labels are wrong
deepstream_app_source1_detection_frcnn.txt: fails to build the TensorRT engine and segfaults:
NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:01.095817111 91 0x55d362a84e00 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
deepstream_app_source1_peoplenet.txt: same errors as deepstream_app_source1_detection_frcnn.txt
deepstream_app_source1_detection_retinanet.txt: runs for a few seconds and then aborts with:
deepstream-app: nvdsinfer_custombboxparser_tlt.cpp:81: bool NvDsInferParseCustomNMSTLT(const std::vector<NvDsInferLayerInfo>&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector<NvDsInferObjectDetectionInfo>&): Assertion `(int) det < out_class_size' failed.
Aborted (core dumped)
deepstream_app_source1_detection_ssd.txt: same error as deepstream_app_source1_detection_retinanet.txt
deepstream_app_source1_detection_yolov3.txt: runs, but behaves like a classification model, outputting labels without detection boxes
deepstream_app_source1_detection_yolov4.txt: runs, but behaves like a classification model, outputting labels without detection boxes
I would appreciate any help in solving these issues. Let me know if you need any further information.
nvidia-tlt.tar.gz (8.1 KB)