How to apply the different nvdspreprocess configs for back-to-back detector

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU):** GPU
**• DeepStream Version:** 6.0

I would like to use two PGIEs with different network input sizes in my application. The problem is that a single nvdspreprocess apparently cannot crop the frame as the input for both PGIEs. My idea is to use two nvdspreprocess elements like this: ->nvdspreprocess0->pgie0->nvdspreprocess1->pgie1. Could this work, and will the user meta attached by nvdspreprocess0 be overwritten by nvdspreprocess1? I am looking forward to your help!

Kind Regards

Yes. The pipeline is OK.

You need to set the correct "target-unique-ids" for each nvdspreprocess:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvdspreprocess.html
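As a sketch (the ROI shapes and unique IDs below are illustrative placeholders, not the actual attached configs): each nvdspreprocess config tags its output tensor meta for one specific nvinfer via "target-unique-ids", which must match that nvinfer's "gie-unique-id", so the two metas do not interfere with each other:

```ini
# config_preprocess0.txt -- feeds pgie0 only (hypothetical values)
[property]
# must match gie-unique-id=1 in pgie0's nvinfer config
target-unique-ids=1
network-input-shape=4;3;368;640

# config_preprocess1.txt -- feeds pgie1 only (hypothetical values)
[property]
# must match gie-unique-id=2 in pgie1's nvinfer config
target-unique-ids=2
network-input-shape=4;3;544;960
```

With this tagging, nvdspreprocess1 attaches its own tensor meta rather than replacing the meta from nvdspreprocess0.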

I tried the following command:

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! \
h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! \
nvdspreprocess config-file= /opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt  ! \
nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt \
nvdspreprocess config-file= /opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt  ! \
nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt \
input-tensor-meta=1 batch-size=7  ! nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nveglglessink

And it reported "Error: Batch-size in network-input-shape should be at least sum total of ROIs". It seems to be caused by the global variable "sum_total_rois".

nvdspreprocess should be used together with nvinfer.
You did not set "input-tensor-meta=1" on the first nvinfer, so the first nvdspreprocess cannot provide input to the first nvinfer.

You set "batch-size=7" on the second nvinfer, but there are only 4 ROIs in the second nvdspreprocess config file, so the pipeline fails.

Please set the correct parameters on both nvdspreprocess and nvinfer.
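As a sketch of the constraint (all values below are illustrative, not the shipped config): if the nvdspreprocess config defines 4 ROIs on one stream, the batch dimension of "network-input-shape" in the preprocess config, and "batch-size" on the matching nvinfer, must both be at least 4:

```ini
# nvdspreprocess config fragment (hypothetical values)
[property]
# batch dimension 4 >= sum total of ROIs across all source groups
network-input-shape=4;3;368;640

[group-0]
src-ids=0
process-on-roi=1
# four ROIs, given as left;top;width;height per ROI -> sum_total_rois = 4
roi-params-src-0=0;0;640;368;640;0;640;368;0;368;640;368;640;368;640;368
```

The matching nvinfer would then be launched with "input-tensor-meta=1 batch-size=4".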

I get the same error:

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! \
h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! \
nvdspreprocess config-file= /opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! \
nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary.txt input-tensor-meta=1 batch-size=4 \
nvdspreprocess config-file= /opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/config_preprocess1.txt ! \
nvinfer config-file-path= /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary1.txt input-tensor-meta=1 batch-size=4 ! \
nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nveglglessink
ERROR: Batch-size in network-input-shape should be atleast Sum Total of ROIs
Setting pipeline to PAUSED …
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1484 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine open error
0:00:00.462249123 14782 0x55d7f1de3100 WARN nvinfer gstnvinfer.cpp:637:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 2]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed
0:00:00.462283551 14782 0x55d7f1de3100 WARN nvinfer gstnvinfer.cpp:637:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 2]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/…/…/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed, try rebuild
0:00:00.462290936 14782 0x55d7f1de3100 INFO nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1456 Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine opened error
0:00:17.847964486 14782 0x55d7f1de3100 WARN nvinfer gstnvinfer.cpp:637:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1942> [UID = 2]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:17.850871862 14782 0x55d7f1de3100 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 2]: Load new model:/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/config_infer_primary1.txt sucessfully
ERROR: Pipeline doesn't want to pause.
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
ERROR: from element /GstPipeline:pipeline0/GstNvDsPreProcess:nvdspreprocess1: Configuration file parsing failed
Additional debug info:
gstnvdspreprocess.cpp(374): gst_nvdspreprocess_start (): /GstPipeline:pipeline0/GstNvDsPreProcess:nvdspreprocess1:
Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-nvdspreprocess/config_preprocess1.txt
Setting pipeline to NULL …
Freeing pipeline …

By the way, batch-size=7 is from here: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvdspreprocess.html

The check is not correct. Please use the attached nvdspreprocess source code gstnvdspreprocess.h (10.1 KB)
and nvdspreprocess_property_parser.cpp (27.2 KB) to rebuild the nvdspreprocess plugin, and your pipeline could be:
config_preprocess1.txt (3.0 KB)
config_preprocess2.txt (3.0 KB)

config_infer_primary1.txt (4.2 KB)
config_infer_primary2.txt (4.2 KB)

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! \
h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! \
nvdspreprocess config-file=config_preprocess1.txt  ! \
nvinfer config-file-path=config_infer_primary1.txt input-tensor-meta=1 \
nvdspreprocess config-file=config_preprocess2.txt  ! \
nvinfer config-file-path=config_infer_primary2.txt \
input-tensor-meta=1 batch-size=7  ! nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nveglglessink 

To avoid confusion: your command is missing a "!":

gst-launch-1.0 filesrc location = /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! \
h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! \
nvdspreprocess config-file=config_preprocess1.txt  ! \
nvinfer config-file-path=config_infer_primary1.txt input-tensor-meta=1 ! \
nvdspreprocess config-file=config_preprocess2.txt  ! \
nvinfer config-file-path=config_infer_primary2.txt  \
input-tensor-meta=1 batch-size=7  ! nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nveglglessink
