Not able to run TAO-trained model in DeepStream pipeline

The following is the configuration I am using for my experiment:
• Hardware (A10)
• Network Type (Yolo_v4)
• TLT Version (nvcr.io/nvidia/tao/tao-toolkit:4.0.0-tf1.15.5)
• TRT and CUDA in the TAO docker (8.5.1.7-1+cuda11.8) vs. TRT and CUDA in DeepStream (8.0.1-1+cuda11.3), so I downgraded the TAO docker to 8.0.1-1+cuda11.3

Hi,

I was experimenting with the TAO Toolkit and hit an issue: the engine file generated by tao-converter (YOLOv4) does not run in our DeepStream deployment pipeline. I tried matching the TRT version in the TAO Toolkit environment to the one in the DeepStream environment, but that produced the same issue.
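For context, the engine was generated inside the TAO container roughly like this (a sketch only; the key, input dims, and file names are placeholders, not my exact command):

$ tao-converter -k <encode_key> \
    -p Input,1x3x384x1248,8x3x384x1248,16x3x384x1248 \
    -t fp16 \
    -e march14_trt.engine \
    yolov4_resnet18.etlt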

Please help!

You can refer to our demo: deepstream_tao_apps. The project includes the deployment of YOLOv4.

You can copy the .etlt file to your inference platform (I think it is a DeepStream docker).
Then git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps and configure your .etlt model as in https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/yolov4_tao/pgie_yolov4_tao_config.txt#L31-L32.
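The two lines referenced there point nvinfer at the encoded model and its decryption key; in your config they would look something like this (the path and key below are examples, substitute your own export):

tlt-encoded-model=../../models/yolov4/yolov4_resnet18.etlt
tlt-model-key=<your_tao_encode_key>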

Hi @Morganh,

Can you share a link where I can get TRT version 8.0.1-1+cuda11.3? I am not finding it here: https://developer.nvidia.com/tensorrt

Do you mean the TensorRT docker for x86?

The issue is not with DeepStream itself. I believe it's a TRT version mismatch that is causing the failure when running in DeepStream (a serialized engine can only be deserialized by the same TensorRT version that built it); model initialization fails with the output below:

[NvMultiObjectTracker] Initialized

0:01:00.604651633 3662 0x2cd6000 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5

ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::34] Error Code 1: Serialization (Serialization assertion safeVersionRead == safeSerializationVersion failed.Version tag does not match. Note: Current Version: 43, Serialized Engine Version: 0)

ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::75] Error Code 4: Internal Error (Engine deserialization failed.)

ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /JioVMD/march14_trt.engine

0:01:01.723663305 3662 0x2cd6000 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/JioVMD/march14_trt.engine failed

0:01:01.723737062 3662 0x2cd6000 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/JioVMD/march14_trt.engine failed, try rebuild

0:01:01.723771046 3662 0x2cd6000 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files

ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:861 failed to build network since there is no model file matched.

ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.

0:01:01.725331049 3662 0x2cd6000 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed

0:01:01.725368269 3662 0x2cd6000 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed

0:01:01.725393035 3662 0x2cd6000 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings

0:01:01.726229351 3662 0x2cd6000 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance

0:01:01.726246153 3662 0x2cd6000 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: pgie.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

[NvMultiObjectTracker] De-initialized

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:

Config file path: pgie.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Exiting app

As mentioned above, you can docker pull a DeepStream docker image:
$ docker pull nvcr.io/nvidia/deepstream:6.1.1-devel

Then, log in to the docker:
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/deepstream:6.1.1-devel /bin/bash
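If your .etlt file is on the host, you can also mount it into the container instead of copying it later (the host path here is just an example):

$ docker run --runtime=nvidia -it --rm -v /path/to/your/models:/workspace/models nvcr.io/nvidia/deepstream:6.1.1-devel /bin/bash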

Git clone the deepstream_tao_apps repo:

$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git

Build this repo according to the README.
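The typical sequence per the README looks like this (a sketch; the CUDA_VER value is an example and must match the CUDA version inside your DeepStream docker, which you can check with nvcc --version):

$ cd deepstream_tao_apps
$ export CUDA_VER=11.7   # example; set to the container's CUDA version
$ make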

Copy your .etlt model over and configure it.
For YOLOv4, please refer to https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/yolov4_tao/pgie_yolov4_tao_config.txt

Run inference with this repo.
Refer to the README at https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps

ds-tao-detection -c pgie_config_file -i <H264 or JPEG file uri> [-b BATCH] [-d] [-f] [-l]
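For example, with the YOLOv4 config above (the binary location and sample stream are assumptions based on the default repo and DeepStream container layout):

$ ./apps/tao_detection/ds-tao-detection \
    -c configs/yolov4_tao/pgie_yolov4_tao_config.txt \
    -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 \
    -d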

No, for amd64 only, but I need TRT 8.0.1-1, which I am not finding on the official download website.

You can pull nvcr.io/nvidia/tensorrt:21.08-py3

More info can be found on the TensorRT page on NVIDIA NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrt

How will I know which tags are associated with TRT 8.0.1-1 on the NGC hub (https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrt)?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

You can click “Layers” on the NGC page to check.
You can also log in to the nvcr.io/nvidia/tensorrt:21.08-py3 docker and run:
$ dpkg -l | grep cuda
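The TensorRT packages can be checked the same way (nvinfer is the standard package name prefix in these containers):

$ dpkg -l | grep nvinfer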
