No detections on deepstream (nvinfer) with nvdspreprocess plugin add

Please provide complete information as applicable to your setup.

• Hardware Platform Jetson
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only) 5.1.2
• TensorRT Version 8.5.2.2
• Issue Type (bugs)
**• How to reproduce the issue?**
I am building a YOLOv8n DeepStream pipeline with 80 classes, and nvinfer returns no detection results when I add the nvdspreprocess plugin to the pipeline. If I don't include nvdspreprocess, the model runs fine in DeepStream and returns values. The files I have submitted below include the paths and configurations I used. I want to limit the model's detection area when running through DeepStream, so I need nvdspreprocess. I set input-tensor-from-meta=1, but nvinfer produces no output; with input-tensor-from-meta=0, nvinfer does produce output.
Below is my running environment:
container: nvcr.io/nvidia/deepstream:6.3-triton-multiarch
script (the .py file cannot be uploaded, so it is renamed to .txt):
pipeline.txt (9.8 KB)
model (uploading is not supported):
yolov8n → onnx → .engine
nvdspreprocess config file:
config_preprocess.txt (733 Bytes)
nvinfer config file:
yolov8s.txt (763 Bytes)

streammux:
width: 1920
height: 1080
I don't know what's going on: the pipeline keeps running, but no model values are returned when I attach a probe. Please help.

I think this is as expected.

If you set input-tensor-from-meta to 1, then nvinfer will use the tensor output by nvdspreprocess as its input instead of the image.

For your needs, specifying the ROI in the nvdspreprocess configuration file and then setting input-tensor-from-meta to 0 is enough.
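If you do keep input-tensor-from-meta=1, the two plugins also have to be linked by matching IDs, or nvinfer will silently ignore the prepared tensor. A minimal sketch of the relevant keys (values here are assumptions matching the configs attached in this thread):

```
# config_preprocess.txt, [property] section:
# target-unique-ids must list the gie-unique-id of the nvinfer
# instance that should consume the prepared tensor
target-unique-ids=1
# tensor-name must exactly match the model's input layer name
tensor-name=input

# nvinfer config, [property] section:
gie-unique-id=1
# in a standalone nvinfer config: input-tensor-from-meta=1
# in deepstream-app: set input-tensor-meta=1 under [primary-gie] instead
```

network-input-shape in the preprocess config (e.g. 1;3;640;640) also has to match the engine's input dimensions.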

My config file (pipeline.txt) runs the pipeline with nvdspreprocess as above, with input-tensor-from-meta=1 configured so that nvinfer takes its input from nvdspreprocess. The problem is that I get no values from the model when the nvdspreprocess plugin is added; the pipeline runs normally when I don't add it.

Sorry for the long delay. I have tried the yolov8 model with deepstream-app from the DeepStream-Yolo repository.

Here is my patch; it works fine.

diff --git a/config_infer_primary_yoloV8.txt b/config_infer_primary_yoloV8.txt
index c0c1311..a9a6155 100644
--- a/config_infer_primary_yoloV8.txt
+++ b/config_infer_primary_yoloV8.txt
@@ -2,8 +2,9 @@
 gpu-id=0
 net-scale-factor=0.0039215697906911373
 model-color-format=0
-onnx-file=yolov8s.onnx
-model-engine-file=model_b1_gpu0_fp32.engine
+#onnx-file=yolov8s.onnx
+onnx-file=/root/DeepStream-Yolo/ultralytics/ultralytics/yolov8s.onnx
+model-engine-file=/root/DeepStream-Yolo/model_b1_gpu0_fp32.engine
 #int8-calib-file=calib.table
 labelfile-path=labels.txt
 batch-size=1
@@ -19,7 +20,7 @@ symmetric-padding=1
 #workspace-size=2000
 parse-bbox-func-name=NvDsInferParseYolo
 #parse-bbox-func-name=NvDsInferParseYoloCuda
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
+custom-lib-path=/root/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
 engine-create-func-name=NvDsInferYoloCudaEngineGet
 
 [class-attrs-all]
diff --git a/config_preprocess.txt b/config_preprocess.txt
new file mode 100644
index 0000000..47d2eb2
--- /dev/null
+++ b/config_preprocess.txt
@@ -0,0 +1,79 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
+#
+# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
+# property and proprietary rights in and to this material, related
+# documentation and any modifications thereto. Any use, reproduction,
+# disclosure or distribution of this material and related documentation
+# without an express license agreement from NVIDIA CORPORATION or
+# its affiliates is strictly prohibited.
+################################################################################
+
+# The values in the config file are overridden by values set through GObject
+# properties.
+
+[property]
+enable=1
+    # list of component gie-id for which tensor is prepared
+target-unique-ids=1
+    # 0=NCHW, 1=NHWC, 2=CUSTOM
+network-input-order=0
+    # 0=process on objects 1=process on frames
+process-on-frame=1
+    #uniquely identify the metadata generated by this element
+unique-id=5
+    # gpu-id to be used
+gpu-id=0
+    # if enabled maintain the aspect ratio while scaling
+maintain-aspect-ratio=1
+    # if enabled pad symmetrically with maintain-aspect-ratio enabled
+symmetric-padding=1
+    # processig width/height at which image scaled
+processing-width=640
+processing-height=640
+    # max buffer in scaling buffer pool
+scaling-buf-pool-size=6
+    # max buffer in tensor buffer pool
+tensor-buf-pool-size=6
+    # tensor shape based on network-input-order
+#network-input-shape= 8;3;544;960
+network-input-shape=1;3;640;640
+    # 0=RGB, 1=BGR, 2=GRAY
+network-color-format=0
+    # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
+tensor-data-type=0
+    # tensor name same as input layer name
+# tensor-name=input_1
+tensor-name=input
+    # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
+scaling-pool-memory-type=0
+    # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
+scaling-pool-compute-hw=0
+    # Scaling Interpolation method
+    # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
+    # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
+    # 6=NvBufSurfTransformInter_Default
+scaling-filter=0
+    # custom library .so path having custom functionality
+custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
+    # custom tensor preparation function name having predefined input/outputs
+    # check the default custom library nvdspreprocess_lib for more info
+custom-tensor-preparation-function=CustomTensorPreparation
+
+[user-configs]
+   # Below parameters get used when using default custom library nvdspreprocess_lib
+   # network scaling factor
+pixel-normalization-factor=0.003921568
+   # mean file path in ppm format
+#mean-file=
+   # array of offsets for each channel
+#offsets=
+
+[group-0]
+src-ids=0
+custom-input-transformation-function=CustomAsyncTransformation
+process-on-roi=1
+roi-params-src-0=300;200;700;800;1300;300;600;700
+roi-params-src-1=860;300;900;500;50;300;500;700
+
diff --git a/deepstream_app_config.txt b/deepstream_app_config.txt
index 8c6822f..1918697 100644
--- a/deepstream_app_config.txt
+++ b/deepstream_app_config.txt
@@ -20,12 +20,31 @@ gpu-id=0
 cudadec-memtype=0
 
 [sink0]
-enable=1
+enable=0
 type=2
 sync=0
 gpu-id=0
 nvbuf-memory-type=0
 
+[sink1]
+enable=1
+type=3
+#1=mp4 2=mkv
+container=1
+#1=h264 2=h265
+codec=3
+#encoder type 0=Hardware 1=Software
+enc-type=0
+sync=0
+bitrate=20000
+#H264 Profile - 0=Baseline 2=Main 4=High
+#H265 Profile - 0=Main 1=Main10
+# set profile only for hw encoder, sw encoder selects profile based on sw-preset
+profile=0
+output-file=out.mp4
+source-id=0
+gpu-id=0
+
 [osd]
 enable=1
 gpu-id=0
@@ -51,12 +70,17 @@ height=1080
 enable-padding=0
 nvbuf-memory-type=0
 
+[pre-process]
+enable=1
+config-file=config_preprocess.txt
+
 [primary-gie]
 enable=1
 gpu-id=0
 gie-unique-id=1
 nvbuf-memory-type=0
-config-file=config_infer_primary.txt
+input-tensor-meta=1
+config-file=config_infer_primary_yoloV8.txt
 
 [tests]
 file-loop=0

After applying the above patch.
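One detail worth double-checking when the two configs above are combined: with input-tensor-meta=1 the normalization is performed by nvdspreprocess, so its pixel-normalization-factor should agree with nvinfer's net-scale-factor (both are meant to be 1/255 here). A quick sanity check, using the constants copied from the configs above:

```python
# Check that the two normalization constants from the configs above
# both approximate 1/255 and therefore agree with each other.
NET_SCALE_FACTOR = 0.0039215697906911373   # nvinfer: net-scale-factor
PIXEL_NORM_FACTOR = 0.003921568            # nvdspreprocess: pixel-normalization-factor

def close_to_inv_255(x, tol=1e-6):
    """True if x is within tol of 1/255."""
    return abs(x - 1.0 / 255.0) < tol

assert close_to_inv_255(NET_SCALE_FACTOR)
assert close_to_inv_255(PIXEL_NORM_FACTOR)
```

If the two factors diverged, detections would degrade or vanish depending on which plugin ends up doing the scaling.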

I ran this command line and got the following result:

deepstream-app -c deepstream_app_config.txt 
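For reference, the roi-params-src-N lines in config_preprocess.txt above pack the ROIs as a flat semicolon-separated list of left;top;width;height quadruples. A small sketch (plain Python, no DeepStream dependency) to parse them and check they fit inside the 1920x1080 streammux frame:

```python
def parse_rois(value):
    """Split a roi-params-src-N value into (left, top, width, height) tuples."""
    nums = [int(v) for v in value.strip().split(";") if v.strip()]
    if len(nums) % 4 != 0:
        raise ValueError("ROI list length must be a multiple of 4")
    return [tuple(nums[i:i + 4]) for i in range(0, len(nums), 4)]

def rois_fit(rois, frame_w=1920, frame_h=1080):
    """True if every ROI lies fully inside the frame."""
    return all(l >= 0 and t >= 0 and l + w <= frame_w and t + h <= frame_h
               for l, t, w, h in rois)

# Values from config_preprocess.txt above: two ROIs for source 0.
rois = parse_rois("300;200;700;800;1300;300;600;700")
print(rois)            # [(300, 200, 700, 800), (1300, 300, 600, 700)]
print(rois_fit(rois))  # True
```

An ROI that extends past the frame is a common cause of preprocess output looking wrong, so validating against the streammux dimensions is cheap insurance.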


Please check your configuration file.

I will try it and respond to you as soon as possible.

I tried the yolov8 model with deepstream-app on a Jetson Orin NX 8GB, JetPack 5.1.2, with DeepStream 6.3:

  1. I ran the deepstream-app with the video source and only primary-gie; the screen showed bboxes as in figure 1.
  2. When I added the nvdspreprocess section to deepstream-app and added input-tensor-meta=1 to primary-gie, the screen only showed the area to be preprocessed, but no bboxes were drawn and the probe returned None, as in figure 2. It seems only the preprocessing step ran and the pipeline never went through the model. Please help.
    Running files:
    deepstream-app:
    deepstream_app.txt (922 Bytes)
    config preprocess:
    config_preprocess.txt (854 Bytes)
    yolov8 model:
    config_yolov8s.txt (688 Bytes)
    figure 1 (running only the model):

    figure 2 (with preprocess added to the pipeline):

I also tried running this command: "gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt input-tensor-meta=1 batch-size=2 ! nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nv3dsink"
from the topic "Gst-nvdspreprocess (Alpha)" in the DeepStream 6.4 documentation,
but it also gives a result similar to figure 2.

I think you can't use that post-processing library for yolov8n; it doesn't match your model. It depends on your model.

custom-lib-path=/opt/nvidia/deepstream/deepstream-6.3/lib/libnvds_infercustomparser_yolo.so

Please use the post-processing library from this project: GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

I have run the model and it shows as in image 1, built from the repo you sent; I only changed the name and moved the lib into DeepStream 6.3.
The question is why it can't run with the original sample source in the docker image using the command above.
Can you try it in your environment with my configurations above?

Using your configuration file, I can get the correct result. I use the nvcr.io/nvidia/deepstream:7.0-triton-multiarch image.

Can you share your onnx model of yolov8n?

Yes, I used this repo and converted it to ONNX following the DeepStream-Yolo guide: DeepStream-Yolo/docs/YOLOv8.md at master · marcoslucianops/DeepStream-Yolo · GitHub
yolov8n.zip (10.5 MB)
Here it is; just unzip it.

I use nvcr.io/nvidia/deepstream:6.3-triton-multiarch on a Jetson Orin NX 8GB, JetPack 5.1.2.

Since I don't have a Jetson with JP-5.1 installed, I used a dGPU to test the yolov8n.onnx model with your config file and nvcr.io/nvidia/deepstream:6.3-triton-multiarch, and it works fine.

Try to modify the following configuration items in config_preprocess.txt.

scaling-pool-memory-type=2
scaling-pool-compute-hw=1

I tried it and the screen just shows blue, with no results.

A little strange.
1. Can you try upgrading to DS-7.0? I am currently unable to debug this issue on DS-6.3 due to a lack of corresponding versions.
2. Does the deepstream-preprocess-test sample work normally?

/opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream-preprocess-test

I built it and don't know what the problem was, but after rebuilding the plugin and copying the .so file to /opt/nvidia/deepstream/deepstream/libs/, it worked.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.