DeepStream secondary GIE trouble running

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.3
• JetPack Version: R35 (release), REVISION: 5.0, GCID: 35550185, BOARD: t186ref, EABI: aarch64, DATE: Tue Feb 20 04:46:31 UTC 2024

• TensorRT Version: 8.5.2.2 (dpkg output below)

glueck@ubuntu:~$ dpkg -l | grep TensorRT
ii graphsurgeon-tf 8.5.2-1+cuda11.4 arm64 GraphSurgeon for TensorRT package
ii libnvinfer-bin 8.5.2-1+cuda11.4 arm64 TensorRT binaries
ii libnvinfer-dev 8.5.2-1+cuda11.4 arm64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 8.5.2-1+cuda11.4 arm64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.5.2-1+cuda11.4 arm64 TensorRT plugin libraries
ii libnvinfer-samples 8.5.2-1+cuda11.4 all TensorRT samples
ii libnvinfer8 8.5.2-1+cuda11.4 arm64 TensorRT runtime libraries
ii libnvonnxparsers-dev 8.5.2-1+cuda11.4 arm64 TensorRT ONNX libraries
ii libnvonnxparsers8 8.5.2-1+cuda11.4 arm64 TensorRT ONNX libraries
ii libnvparsers-dev 8.5.2-1+cuda11.4 arm64 TensorRT parsers libraries
ii libnvparsers8 8.5.2-1+cuda11.4 arm64 TensorRT parsers libraries
ii onnx-graphsurgeon 8.5.2-1+cuda11.4 arm64 ONNX GraphSurgeon for TensorRT package
ii python3-libnvinfer 8.5.2-1+cuda11.4 arm64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.5.2-1+cuda11.4 arm64 Python 3 development package for TensorRT
ii tensorrt 8.5.2.2-1+cuda11.4 arm64 Meta package for TensorRT
ii tensorrt-libs 8.5.2.2-1+cuda11.4 arm64 Meta package for TensorRT runtime libraries
ii uff-converter-tf 8.5.2-1+cuda11.4 arm64 UFF converter for TensorRT package

• NVIDIA GPU Driver Version (valid for GPU only)
glueck@ubuntu:~$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX Open Kernel Module for aarch64 35.5.0 Release Build (buildbrain@mobile-u64-6519-d7000) Mon Feb 19 20:34:12 PST 2024
GCC version: gcc version 9.3.0 (Buildroot 2020.08)

glueck@ubuntu:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/tao_pretrained_models/deepstream_reference_apps/deepstream_app_tao_configs$ sudo deepstream-app -c deepstream_app_source1_peoplenet.txt
** WARN: <parse_tracker:1604>: Unknown key ‘enable-batch-process’ for group [tracker]
** WARN: <parse_tracker:1604>: Unknown key ‘enable-past-frame’ for group [tracker]
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::32] Error Code 1: Serialization (Serialization assertion magicTagRead == kMAGIC_TAG failed.Magic tag does not match)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::65] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: Deserialize engine failed from file: /home/glueck/Downloads/pcp-gce-pcp-gce-v1.3@2c41f1f8df8/assets/glueck-ce/glueck-ce-services/model/gender-model-caffe-v2.0/gender.caffemodel
0:00:05.662980820 7478 0xaaaad6b7bd00 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 2]: deserialize engine from file :/home/glueck/Downloads/pcp-gce-pcp-gce-v1.3@2c41f1f8df8/assets/glueck-ce/glueck-ce-services/model/gender-model-caffe-v2.0/gender.caffemodel failed
0:00:05.866090648 7478 0xaaaad6b7bd00 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 2]: deserialize backend context from engine from file :/home/glueck/Downloads/pcp-gce-pcp-gce-v1.3@2c41f1f8df8/assets/glueck-ce/glueck-ce-services/model/gender-model-caffe-v2.0/gender.caffemodel failed, try rebuild
0:00:05.866219162 7478 0xaaaad6b7bd00 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: Could not find output layer ‘prob’
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:08.194638220 7478 0xaaaad6b7bd00 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 2]: build engine file failed
0:00:08.404800203 7478 0xaaaad6b7bd00 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 2]: build backend context failed
0:00:08.404868461 7478 0xaaaad6b7bd00 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 2]: generate backend failed, check config file settings
0:00:08.404928974 7478 0xaaaad6b7bd00 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<secondary_gie_0> error: Failed to create NvDsInferContext instance
0:00:08.404947054 7478 0xaaaad6b7bd00 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<secondary_gie_0> error: Config file path: /home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/configs/tlt_pretrained_models/config_infer_gender_classifier.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:716: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
ERROR from secondary_gie_0: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_0:
Config file path: /home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/configs/tlt_pretrained_models/config_infer_gender_classifier.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

config_infer_gender_classifier.txt (3.8 KB)

deepstream_app_source1_peoplenet.txt (5.9 KB)

The secondary GIE is not loading the model and running. Can you tell me what the error is? I have attached the config files.

Please follow the steps in deepstream_reference_apps/deepstream_app_tao_configs at master · NVIDIA-AI-IOT/deepstream_reference_apps (github.com)

I followed those steps exactly. The other secondary GIEs are working fine; just one of them, the gender classifier, is not working.

Please check the platform compatibility: Quickstart Guide — DeepStream 6.3 Release documentation.
Did you install by SDKManager?

Yes, by SDK Manager.

Why does the directory “/home/glueck/Downloads/pcp-gce-pcp-gce-v1.3@2c41f1f8df8/assets/glueck-ce/glueck-ce-services/model/gender-model-caffe-v2.0/” appear? The PeopleNet model is not in that directory.

It is our own trained model.

So the issue is that DeepStream failed to parse your customized model and build a TensorRT engine for it. Do you have a TensorRT sample for your model? We cannot tell whether your nvinfer configuration file is correct without any information about your model.

Your configuration “model-engine-file=/home/glueck/Downloads/pcp-gce-pcp-gce-v1.3@2c41f1f8df8/assets/glueck-ce/glueck-ce-services/model/gender-model-caffe-v2.0/gender.caffemodel” in config_infer_gender_classifier.txt is wrong. “model-engine-file” should point to a TensorRT engine file. If you don’t have one, either do not configure it at all, or set it to the engine file name and directory you want DeepStream to generate.

It seems your model’s output layer name is not “prob”, yet you have configured “output-blob-names=prob” in config_infer_gender_classifier.txt. Please configure the correct name.
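Putting the two fixes together, the relevant part of config_infer_gender_classifier.txt might look like the sketch below. The paths are the ones mentioned later in this thread; the engine file name is just an example of DeepStream’s naming convention, and the output-blob-names value is a placeholder you must replace with your network’s actual final top blob:

```
[property]
# The Caffe weights and network description go here, not in model-engine-file
model-file=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/genderv11/gender11.caffemodel
proto-file=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/genderv11/deploy.prototxt
# Optional: a writable path where DeepStream will serialize the engine it builds
model-engine-file=/home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/genderv11/gender11.caffemodel_b1_gpu0_fp16.engine
# Replace with the real top blob of your final layer (check deploy.prototxt)
output-blob-names=<your-output-layer>
```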

Please read Gst-nvinfer — DeepStream 6.3 Release documentation and get the parameters you need.

~/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/models/genderv11$ ls
deploy.prototxt labels.txt mean.yaml settings.xml
gender11.caffemodel mean.binaryproto meanyamlgen train_val.prototxt

This is what we got for this model.
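To find the correct value for output-blob-names, you can list the layer names and output blobs declared in deploy.prototxt. A minimal sketch follows; the regex-based line scan is an assumption for illustration (a real protobuf text parser using caffe.proto would be more robust):

```python
import re
import sys


def list_layer_tops(prototxt_path):
    """Return (layer_name, top_blob) pairs found in a Caffe deploy.prototxt.

    Naive line-based scan, not a real protobuf text parser: it pairs each
    'name:' line with the 'top:' lines that follow it.
    """
    pairs = []
    current = None
    with open(prototxt_path) as f:
        for line in f:
            m = re.match(r'\s*name:\s*"([^"]+)"', line)
            if m:
                current = m.group(1)
                continue
            m = re.match(r'\s*top:\s*"([^"]+)"', line)
            if m and current is not None:
                pairs.append((current, m.group(1)))
    return pairs


if __name__ == "__main__":
    # e.g. python list_tops.py deploy.prototxt
    for name, top in list_layer_tops(sys.argv[1]):
        print(f"layer {name!r} -> top blob {top!r}")
```

The top blob of the final layer (often a Softmax) is what output-blob-names should name; if it is, say, “softmax” rather than “prob”, that would explain the “Could not find output layer ‘prob’” error in the log above.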

Won't it create the model engine file on the first run?

This is the Caffe model. You should set it as “model-file”.

Yes. It will if you give a writable directory and engine file name.

So for the engine file, should we leave it blank or point it to the model folder?

Yes. Leave the engine file blank, or set it to the path of the engine file you want to generate.

** WARN: <parse_tracker:1604>: Unknown key ‘enable-batch-process’ for group [tracker]
** WARN: <parse_tracker:1604>: Unknown key ‘enable-past-frame’ for group [tracker]
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
0:00:00.316327326 5778 0xaaaaeec6dcf0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: Could not find output layer ‘prob’
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:03.448217704 5778 0xaaaaeec6dcf0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 2]: build engine file failed
0:00:03.648396303 5778 0xaaaaeec6dcf0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 2]: build backend context failed
0:00:03.648467313 5778 0xaaaaeec6dcf0 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 2]: generate backend failed, check config file settings
0:00:03.648538643 5778 0xaaaaeec6dcf0 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<secondary_gie_0> error: Failed to create NvDsInferContext instance
0:00:03.648556788 5778 0xaaaaeec6dcf0 WARN nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<secondary_gie_0> error: Config file path: /home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/configs/tlt_pretrained_models/config_infer_gender_classifier.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:716: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
ERROR from secondary_gie_0: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_0:
Config file path: /home/glueck/Downloads/gce-deepstream-master@d60d0d171d5/gce-deepstream/configs/tlt_pretrained_models/config_infer_gender_classifier.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

This is the error I got.

Please refer to Deepstream secondary gie trouble running - #11 by Fiona.Chen

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.