Running a re-identification model in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform = Jetson
• DeepStream Version = 6.1.1
• JetPack Version (valid for Jetson only) = 5.0.2
• TensorRT Version = 8.4
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type = Question

Hi, I wanted to know: if I use a person detector as the PGIE in a DeepStream pipeline, will this re-identification model work as an SGIE?
If yes, how do I convert the .etlt/.onnx model into a TensorRT engine?

I tried to convert both the ONNX and the TLT model from https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/reidentificationnet/files, but got the errors below.

For the ONNX file, the error is:

```
Trying to create engine from model files
ERROR: [TRT]: 4: [network.cpp::validate::2671] Error Code 4: Internal Error (Network must have at least one output)
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:08.203561097 174027 0x28bace70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 2]: build engine file failed
0:00:08.276716877 174027 0x28bace70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 2]: build backend context failed
0:00:08.276894798 174027 0x28bace70 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 2]: generate backend failed, check config file settings
0:00:08.277196143 174027 0x28bace70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:08.277265136 174027 0x28bace70 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Config file path: configs/reid_sgie1_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
0:00:08.277395441 174027 0x28bace70 WARN GST_PADS gstpad.c:1142:gst_pad_set_active:secondary1-nvinference-engine:sink Failed to activate pad

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine:
Config file path: configs/reid_sgie1_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
```
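(Side note for anyone hitting the same "Network must have at least one output" error: a quick way to check whether the ONNX file itself is valid, independently of DeepStream, is to build an engine directly with trtexec. The input tensor name `input` and the 1x3x256x128 shape below are assumptions; verify the real names and shapes with a viewer such as Netron.)

```
# Sanity-check the ONNX model outside DeepStream; trtexec ships with TensorRT on Jetson.
# --shapes is only needed when the model has dynamic input dimensions.
/usr/src/tensorrt/bin/trtexec \
    --onnx=models/resnet50_market1501_aicity156.onnx \
    --saveEngine=models/resnet50_market1501_aicity156.onnx_b1_gpu0_fp16.engine \
    --fp16 \
    --shapes=input:1x3x256x128
```

If trtexec reports the same "no output" error, the problem is in the exported model file itself; if it succeeds, the issue is on the DeepStream config side and you can point `model-engine-file` at the saved engine.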

For the TLT (.etlt) model, the error is:

```
Trying to create engine from model files
parseModel: Failed to parse ONNX model
ERROR: Failed to build network, error in model parsing.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:10.612283938 716538 0x19e60670 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 2]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::61] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
Aborted (core dumped)
```
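(For the .etlt route, an engine is normally built offline with tao-converter rather than parsed by nvinfer at startup. A minimal sketch; the key `nvidia_tao` and the 3x256x128 input named `input` are assumptions here — the config below uses `nvidia_tlt`, so double-check which key and shapes the NGC model card actually specifies:)

```
# Build an FP16 engine from the encrypted .etlt model.
# -k is the model's encryption key (check the ReidentificationNet card on NGC);
# -p gives min/opt/max shapes for the dynamic batch dimension.
./tao-converter -k nvidia_tao -t fp16 \
    -p input,1x3x256x128,8x3x256x128,16x3x256x128 \
    -e models/resnet50_market1501.etlt_b16_gpu0_fp16.engine \
    models/resnet50_market1501.etlt
```

Then set `model-engine-file` in the SGIE config to the generated engine.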

SGIE config:

```
[property]
gpu-id=0
net-scale-factor=1
tlt-model-key=nvidia_tlt
tlt-encoded-model=/home/sensormatic/Neo/deepstream_python_apps/apps/deepstream-ReID/models/resnet50_market1501.etlt
#onnx-file=/home/sensormatic/Neo/deepstream_python_apps/apps/deepstream-ReID/models/resnet50_market1501_aicity156.onnx
model-engine-file=/home/sensormatic/Neo/deepstream_python_apps/apps/deepstream-ReID/models/resnet50_market1501.etltl_b1_gpu0_fp16.engine
#labelfile-path=/home/sensormatic/Neo/deepstream_python_apps/apps/deepstream-ReID/models/labels.txt
#int8-calib-file=/opt/nvidia/deepstream/deepstream-6.1/samples/models/Secondary_CarColor/cal_trt.bin
force-implicit-batch-dim=0
batch-size=1
#0=FP32 and 1=INT8 2=FP16 mode
network-mode=2
#input-object-min-width=64
#input-object-min-height=64
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=output
classifier-async-mode=1
classifier-threshold=0.51
output-tensor-meta=1
#scaling-filter=0
#scaling-compute-hw=0
```
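(Two things worth double-checking in this config: `model-engine-file` names the engine with a stray `l` (`.etltl_...`), so even a successfully built engine would never be found under that path, and `tlt-model-key`/`output-blob-names` should match the NGC model card. A minimal corrected fragment for the .etlt path, keeping everything else from the original, might look like:)

```
tlt-model-key=nvidia_tlt
tlt-encoded-model=/home/sensormatic/Neo/deepstream_python_apps/apps/deepstream-ReID/models/resnet50_market1501.etlt
# ".etlt", not ".etltl", so nvinfer can find and reuse a previously built engine
model-engine-file=/home/sensormatic/Neo/deepstream_python_apps/apps/deepstream-ReID/models/resnet50_market1501.etlt_b1_gpu0_fp16.engine
```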

Could you refer to our demo deepstream-mdx-perception-app?


Does this support DeepStream 6.1.1?

This demo was developed on DeepStream 6.2, which has better compatibility. You can try updating JetPack and DeepStream to the latest versions first.


Okay, let me try it.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
