Running a YOLOv4-tiny custom model

Starting pipeline

0:00:01.286246719 2967031 0x2dbd760 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/yolo.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x512x512
1 OUTPUT kFLOAT boxes 3840x1x4
2 OUTPUT kFLOAT confs 3840x6

ERROR: [TRT]: 3: Cannot find binding of given name: BatchedNMS
0:00:01.315672932 2967031 0x2dbd760 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer ‘BatchedNMS’ in engine
0:00:01.315775074 2967031 0x2dbd760 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/yolo.engine
0:00:01.321222578 2967031 0x2dbd760 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:yolo_config.txt sucessfully
Decodebin child added: source

Decodebin child added: decodebin0

Decodebin child added: rtph264depay0

Decodebin child added: h264parse0

Decodebin child added: capsfilter0

Decodebin child added: nvv4l2decoder0

In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f540dc59820 (GstCapsFeatures at 0x7f5338040fa0)>
Segmentation fault (core dumped)

I am getting the following error: “ERROR: [TRT]: 3: Cannot find binding of given name: BatchedNMS”
How do I solve this?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
• The pipeline being used

• Hardware Platform: GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8
• NVIDIA GPU Driver Version: 525.60.11

I am using the deepstream-rtsp-in-rtsp-out app, passing my custom YOLOv4-tiny config file, and I am getting the above error.

Did you set “BatchedNMS” as an output in your GIE config while there is no BatchedNMS layer in your model?

So then what could my output layer be?

When I convert a TAO YOLO model, the output blob names are displayed, but here I have converted a custom YOLO model to ONNX and then generated the engine file.
So what should my config file contain?

A quick response would be a great help!

This is your custom ONNX model, right?

As you can see in the DeepStream log, the output layer info is shown below:

0 INPUT kFLOAT input 3x512x512
1 OUTPUT kFLOAT boxes 3840x1x4
2 OUTPUT kFLOAT confs 3840x6
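
For reference, the engine’s binding names can also be listed directly with the TensorRT Python API. A minimal sketch, assuming TensorRT 8 and the engine path from the log above (the init_libnvinfer_plugins call is only needed if the engine uses TensorRT plugins):

import tensorrt as trt

ENGINE_PATH = "/opt/nvidia/deepstream/deepstream-6.1/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/yolo.engine"

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # harmless when no plugins are used
runtime = trt.Runtime(logger)
with open(ENGINE_PATH, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# Print each binding: index, direction, name, shape
for i in range(engine.num_bindings):
    direction = "INPUT " if engine.binding_is_input(i) else "OUTPUT"
    print(i, direction, engine.get_binding_name(i), engine.get_binding_shape(i))

For this engine it should print the same names as the log: input, boxes and confs.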

I want the output blob names!
Let me share my config file.

boxes and confs are the output blob names of your model. Have you checked your model?
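
One quick way to check, if the exported ONNX file is still at hand, is the minimal sketch below; the file name is an assumption:

import onnx

# Load the exported model and print its graph input/output names
model = onnx.load("yolov4-tiny.onnx")  # assumed file name
print("inputs :", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])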

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=/opt/nvidia/deepstream/deepstream-6.1/class.txt
model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/yolov4-tiny.engine
tlt-model-key=
infer-dims=3;512;512
maintain-aspect-ratio=1
uff-input-order=0
#uff-input-blob-name=Input
uff-input-blob-name=3x512x512
batch-size=1

# 0=FP32, 1=INT8, 2=FP16 mode

network-mode=2
num-detected-classes=6
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
#no cluster
cluster-mode=3
#output-blob-names=BatchedNMS
output-blob-names=3840x1x4
#parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
parse-bbox-func-name=NvDsInferParseCustomYoloV4
#parse-bbox-func-name=3840x1x4
#custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/TRT-OSS/x86/TensorRT/build/libnvinfer_plugin.so.8
#engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
#######################
This is my config file. Can you tell me what is right and what is wrong?

I need help with the following variables!

uff-input-order “?”
uff-input-blob-name “?”
parse-bbox-func-name “?”
custom-lib-path “Which .so should I use here?”

0:00:01.215098373 37 0x2321b60 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/yolov4-tiny.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input 3x512x512
1 OUTPUT kFLOAT boxes 3840x1x4
2 OUTPUT kFLOAT confs 3840x6

ERROR: [TRT]: 3: Cannot find binding of given name: 3840x1x4
0:00:01.240133006 37 0x2321b60 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer ‘3840x1x4’ in engine
0:00:01.240149967 37 0x2321b60 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/yolov4-tiny.engine
0:00:01.251401287 37 0x2321b60 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initResource() <nvdsinfer_context_impl.cpp:778> [UID = 1]: Detect-postprocessor failed to init resource because dlsym failed to get func NvDsInferParseCustomYoloV4 pointer
ERROR: nvdsinfer_context_impl.cpp:1074 Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
ERROR: nvdsinfer_context_impl.cpp:1280 Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
0:00:01.257605903 37 0x2321b60 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:01.257627495 37 0x2321b60 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Config file path: yolo_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: yolo_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED

################
I am stuck with the above error; a solution would be a great help!
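
The dlsym failure above means that the library given in custom-lib-path does not export a function named NvDsInferParseCustomYoloV4; libnvinfer_plugin.so.8 is the TensorRT plugin library, not a bbox-parser library. This can be verified with a minimal Python sketch that performs the same dlsym lookup nvinfer does at startup (the path is the one from the config above):

import ctypes

LIB = "/opt/nvidia/deepstream/deepstream-6.1/deepstream_tao_apps/TRT-OSS/x86/TensorRT/build/libnvinfer_plugin.so.8"

lib = ctypes.CDLL(LIB)  # dlopen, as nvinfer does with custom-lib-path
try:
    lib.NvDsInferParseCustomYoloV4  # attribute access triggers a dlsym lookup
    print("NvDsInferParseCustomYoloV4 is exported")
except AttributeError:
    print("NvDsInferParseCustomYoloV4 is NOT exported by this library")

custom-lib-path must instead point to a library compiled to export that parser, for example one built from a YOLOv4 custom bbox parser sample.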

Hi @Ni_Fury
These parameters are all model-related. You need to understand what they are, from my explanation below or from the DeepStream guide, then check your model and decide what needs to be set. Since I have no idea what your model is, I can’t tell you what they need to be set to.

uff-input-order: the input order (NCHW or NHWC) of your model
uff-input-blob-name: the input layer name of your network
parse-bbox-func-name: the name of the custom post-processing parse function, if your model needs one
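
Applied to this particular model (input layer input, output layers boxes and confs, per the engine log), a rough sketch of the relevant properties would be as follows; the custom-lib-path value is a hypothetical placeholder and must point to a library that actually exports NvDsInferParseCustomYoloV4:

uff-input-order=0
# the input layer name, not its dimensions
uff-input-blob-name=input
output-blob-names=boxes;confs
parse-bbox-func-name=NvDsInferParseCustomYoloV4
# hypothetical path; the library must export NvDsInferParseCustomYoloV4
custom-lib-path=/path/to/libnvdsinfer_custom_impl_yolov4.so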

Thank you!

How do I write a config file with multiple primary and secondary models?