How to use a custom model with RTSP streams in deepstream-imagedata-multistream

Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
Creating source_bin 1

Creating source bin
source-bin-01
Creating source_bin 2

Creating source bin
source-bin-02
Creating source_bin 3

Creating source bin
source-bin-03
Creating Pgie

Creating nvvidconv1

Creating filter1

Creating tiler

Creating nvvidconv

Creating nvosd

Creating transform

Creating EGLSink

Atleast one of the sources is live
Adding elements to Pipeline

Linking elements in the Pipeline

Now playing…
1 : rtsp://admin:Fire1000@192.168.1.105/Streaming/Channels/103
2 : rtsp://admin:Fire1000@192.168.1.103/Streaming/Channels/103
3 : rtsp://admin:Fire1000@192.168.1.102/Streaming/Channels/103
4 : rtsp://admin:Fire1000@192.168.1.106/Streaming/Channels/103
Starting pipeline

Using winsys: x11
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:05.580961936 23882 0x3861b860 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/dong.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x640x640
1 OUTPUT kFLOAT prob 6001x1x1

0:00:05.581170325 23882 0x3861b860 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1833> [UID = 1]: Backend has maxBatchSize 1 whereas 4 has been requested
0:00:05.581213590 23882 0x3861b860 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/dong.engine failed to match config params, trying rebuild
0:00:05.619096375 23882 0x3861b860 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:05.620211087 23882 0x3861b860 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:05.620294609 23882 0x3861b860 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:05.620332914 23882 0x3861b860 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:05.621322022 23882 0x3861b860 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:05.621380680 23882 0x3861b860 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: dstest_multi.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest_multi.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

1. Jetson NX
2. DeepStream 6.0
3. JetPack 6.1
4. TensorRT 8.0.1
5. How do I use a custom-trained YOLOv5 model with deepstream-imagedata-multistream in deepstream_python_apps?

I used a custom yolov5s model in the configuration file and got the error above.

Your model has only one output layer; is this correct? Please refer to the Gst-nvinfer — DeepStream 6.3 Release documentation to fill in the correct parameters in the nvinfer config.
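For reference, the log shows the engine exposes a single output layer named `prob` and was built with maxBatchSize 1, so requesting batch-size 4 (from 4 RTSP sources) forces a rebuild that fails because no model file is configured. A `[property]` fragment consistent with that engine might look like this (the path is taken from the log; treat the commented values as suggestions, not verified settings):

```
[property]
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/dong.engine
# Match the engine: it was serialized with maxBatchSize 1, so requesting
# more forces nvinfer to rebuild from a model file that is not configured.
batch-size=1
# The log reports exactly one output layer, named "prob".
output-blob-names=prob
# 0 = detector, 1 = classifier; a YOLOv5 model is a detector.
network-type=0
```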

I have written it according to the document, but there may be some problems I don't understand. Can you tell me what's wrong? I want to run multi-channel RTSP output with the custom engine. The following is the content of the dstest_imagedata_config.txt I modified in the deepstream-imagedata-multistream folder:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#model-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel
#proto-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/dong.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/cal_trt.bin
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV5
force-implicit-batch-dim=1
batch-size=16
process-mode=1
model-color-format=0
network-mode=0
network-type=1
num-detected-classes=2
interval=0
gie-unique-id=2
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

# 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)

cluster-mode=4

[streammux]
gpu-id=0
live-source=1
batch-size=16
batched-push-timeout=1
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.7
minBoxes=1

#Use the config params below for dbscan clustering mode
[class-attrs-all]
detected-min-w=2
detected-min-h=2
minBoxes=1

# Per class configurations

[class-attrs-0]
eps=0.7
dbscan-min-score=0.95

[class-attrs-1]
eps=0.7
dbscan-min-score=0.5

How did you get the engine?

Have you implemented “NvDsInferParseCustomYoloV5” yourself?

I don’t think we have such a sample or folder in our DeepStream SDK. Are you working on the deepstream-image-meta-test sample? If so, note that there is no [streammux] or [primary-gie] group in the nvinfer config file. Please read the document carefully: Gst-nvinfer — DeepStream 6.3 Release documentation

Most parameters are related to the model, so you must fill the parameters according to your own model.
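To illustrate the point above: an nvinfer config file keeps only the [property] and [class-attrs-*] groups; [streammux] and [primary-gie] belong in an app-level config, not here. A trimmed sketch reusing the paths and values posted earlier (with batch-size matched to the engine reported in the log; verify every value against your own model):

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/dong.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/labels.txt
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV5
# Engine was built with maxBatchSize 1 (see the log warning)
batch-size=1
process-mode=1
network-mode=0
# 0 = detector (YOLOv5), not 1 = classifier
network-type=0
num-detected-classes=2
gie-unique-id=1
cluster-mode=4

[class-attrs-all]
pre-cluster-threshold=0.2
```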

Thank you for your answer. I have solved the problem.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.