No detections on deepstream (nvinfer) with nvdspreprocess plugin add

Please provide complete information as applicable to your setup.

• Hardware Platform Jetson
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only) 5.1.2
• TensorRT Version 8.5.2.2
• Issue Type (bugs)
**• How to reproduce the issue?**
I am creating a YOLOv8n DeepStream pipeline with 80 classes, and I am having an issue where nvdspreprocess causes the model to return no detection results. This happens when I add the nvdspreprocess plugin to the pipeline; if I don’t include it, the AI model runs fine in DeepStream and returns values. The files I have submitted include the paths and the configuration I ran with. I want to limit the detection area for the model in DeepStream, so I need to use nvdspreprocess. I set input-tensor-from-meta=1, etc., but nvinfer has no output. If input-tensor-from-meta=0, nvinfer has output.
Below is my running environment :
container: nvcr.io/nvidia/deepstream:6.3-triton-multiarch
**script** (the .py file cannot be uploaded, so it is renamed to .txt)
pipeline.txt (9.8 KB)
model (uploading is not supported)
yolov8n → onnx → .engine
nvdspreprocess config file
config_preprocess.txt (733 Bytes)
nvinfer config file
yolov8s.txt (763 Bytes)

streammux:
width: 1920
height: 1080
I don’t know what’s going on: the pipeline keeps running, but no model values are returned when I attach a probe. Please help.
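For reference, the probe can be sketched along the lines of the DeepStream Python samples (pyds is assumed to be available in the container; the names are illustrative, not taken from the attached pipeline.txt). It prints how many objects nvinfer attached to each frame, which makes the "no detections" case easy to spot:

```python
def buffer_probe(pad, info, u_data):
    """Pad probe (e.g. on the nvinfer src pad): print per-frame detection counts."""
    # Imported inside the function so the sketch stands alone.
    import pyds                    # DeepStream Python bindings
    from gi.repository import Gst  # GStreamer bindings

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Batch metadata hangs off the Gst buffer; each frame's metadata carries
    # the number of NvDsObjectMeta (detections) nvinfer attached to it.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print(f"frame {frame_meta.frame_num}: {frame_meta.num_obj_meta} objects")
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

If every frame reports 0 objects while the pipeline keeps running, the buffers are flowing but nvinfer is not producing metadata.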

I think this is as expected.

If you set input-tensor-from-meta to 1, then nvinfer will use the tensor output by nvdspreprocess as input instead of the image.

For your needs, specifying the ROI in the nvdspreprocess configuration file and then setting input-tensor-from-meta to 0 is enough.
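For example, a minimal ROI section in the nvdspreprocess configuration file could look like this (the coordinates are hypothetical; each ROI is given as left;top;width;height):

```
[group-0]
src-ids=0
process-on-roi=1
# two ROIs on source 0, as left;top;width;height quadruples
roi-params-src-0=300;200;700;800;1300;300;600;700
```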

In the config file (pipeline.txt) I ran the pipeline with nvdspreprocess as above, configured with input-tensor-from-meta=1 so that nvinfer takes its input from nvdspreprocess, but the problem is that I don’t get any values from the model when I add the nvdspreprocess plugin. The pipeline runs normally when I don’t add it.

Sorry for the long delay. I have tried the YOLOv8 model with deepstream-app from the DeepStream-Yolo repository.

Here is my patch; it works fine.

diff --git a/config_infer_primary_yoloV8.txt b/config_infer_primary_yoloV8.txt
index c0c1311..a9a6155 100644
--- a/config_infer_primary_yoloV8.txt
+++ b/config_infer_primary_yoloV8.txt
@@ -2,8 +2,9 @@
 gpu-id=0
 net-scale-factor=0.0039215697906911373
 model-color-format=0
-onnx-file=yolov8s.onnx
-model-engine-file=model_b1_gpu0_fp32.engine
+#onnx-file=yolov8s.onnx
+onnx-file=/root/DeepStream-Yolo/ultralytics/ultralytics/yolov8s.onnx
+model-engine-file=/root/DeepStream-Yolo/model_b1_gpu0_fp32.engine
 #int8-calib-file=calib.table
 labelfile-path=labels.txt
 batch-size=1
@@ -19,7 +20,7 @@ symmetric-padding=1
 #workspace-size=2000
 parse-bbox-func-name=NvDsInferParseYolo
 #parse-bbox-func-name=NvDsInferParseYoloCuda
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
+custom-lib-path=/root/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
 engine-create-func-name=NvDsInferYoloCudaEngineGet
 
 [class-attrs-all]
diff --git a/config_preprocess.txt b/config_preprocess.txt
new file mode 100644
index 0000000..47d2eb2
--- /dev/null
+++ b/config_preprocess.txt
@@ -0,0 +1,79 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
+#
+# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
+# property and proprietary rights in and to this material, related
+# documentation and any modifications thereto. Any use, reproduction,
+# disclosure or distribution of this material and related documentation
+# without an express license agreement from NVIDIA CORPORATION or
+# its affiliates is strictly prohibited.
+################################################################################
+
+# The values in the config file are overridden by values set through GObject
+# properties.
+
+[property]
+enable=1
+    # list of component gie-id for which tensor is prepared
+target-unique-ids=1
+    # 0=NCHW, 1=NHWC, 2=CUSTOM
+network-input-order=0
+    # 0=process on objects 1=process on frames
+process-on-frame=1
+    #uniquely identify the metadata generated by this element
+unique-id=5
+    # gpu-id to be used
+gpu-id=0
+    # if enabled maintain the aspect ratio while scaling
+maintain-aspect-ratio=1
+    # if enabled pad symmetrically with maintain-aspect-ratio enabled
+symmetric-padding=1
+    # processig width/height at which image scaled
+processing-width=640
+processing-height=640
+    # max buffer in scaling buffer pool
+scaling-buf-pool-size=6
+    # max buffer in tensor buffer pool
+tensor-buf-pool-size=6
+    # tensor shape based on network-input-order
+#network-input-shape= 8;3;544;960
+network-input-shape=1;3;640;640
+    # 0=RGB, 1=BGR, 2=GRAY
+network-color-format=0
+    # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
+tensor-data-type=0
+    # tensor name same as input layer name
+# tensor-name=input_1
+tensor-name=input
+    # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
+scaling-pool-memory-type=0
+    # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
+scaling-pool-compute-hw=0
+    # Scaling Interpolation method
+    # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
+    # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
+    # 6=NvBufSurfTransformInter_Default
+scaling-filter=0
+    # custom library .so path having custom functionality
+custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
+    # custom tensor preparation function name having predefined input/outputs
+    # check the default custom library nvdspreprocess_lib for more info
+custom-tensor-preparation-function=CustomTensorPreparation
+
+[user-configs]
+   # Below parameters get used when using default custom library nvdspreprocess_lib
+   # network scaling factor
+pixel-normalization-factor=0.003921568
+   # mean file path in ppm format
+#mean-file=
+   # array of offsets for each channel
+#offsets=
+
+[group-0]
+src-ids=0
+custom-input-transformation-function=CustomAsyncTransformation
+process-on-roi=1
+roi-params-src-0=300;200;700;800;1300;300;600;700
+roi-params-src-1=860;300;900;500;50;300;500;700
+
diff --git a/deepstream_app_config.txt b/deepstream_app_config.txt
index 8c6822f..1918697 100644
--- a/deepstream_app_config.txt
+++ b/deepstream_app_config.txt
@@ -20,12 +20,31 @@ gpu-id=0
 cudadec-memtype=0
 
 [sink0]
-enable=1
+enable=0
 type=2
 sync=0
 gpu-id=0
 nvbuf-memory-type=0
 
+[sink1]
+enable=1
+type=3
+#1=mp4 2=mkv
+container=1
+#1=h264 2=h265
+codec=3
+#encoder type 0=Hardware 1=Software
+enc-type=0
+sync=0
+bitrate=20000
+#H264 Profile - 0=Baseline 2=Main 4=High
+#H265 Profile - 0=Main 1=Main10
+# set profile only for hw encoder, sw encoder selects profile based on sw-preset
+profile=0
+output-file=out.mp4
+source-id=0
+gpu-id=0
+
 [osd]
 enable=1
 gpu-id=0
@@ -51,12 +70,17 @@ height=1080
 enable-padding=0
 nvbuf-memory-type=0
 
+[pre-process]
+enable=1
+config-file=config_preprocess.txt
+
 [primary-gie]
 enable=1
 gpu-id=0
 gie-unique-id=1
 nvbuf-memory-type=0
-config-file=config_infer_primary.txt
+input-tensor-meta=1
+config-file=config_infer_primary_yoloV8.txt
 
 [tests]
 file-loop=0

After applying the above patch, run this command line. I got this result:

deepstream-app -c deepstream_app_config.txt 
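One thing worth double-checking when pairing nvdspreprocess with nvinfer in this setup: with input-tensor-meta=1, nvinfer consumes the tensor prepared by nvdspreprocess, so pixel-normalization-factor in config_preprocess.txt should match net-scale-factor in the nvinfer config. Both values above are approximately 1/255:

```python
# Values copied from the two config files in the patch above; both should
# represent the same uint8 -> [0, 1] normalization, i.e. 1/255.
net_scale_factor = 0.0039215697906911373  # nvinfer: net-scale-factor
pixel_normalization_factor = 0.003921568  # nvdspreprocess: pixel-normalization-factor

assert abs(net_scale_factor - 1 / 255) < 1e-8
assert abs(pixel_normalization_factor - 1 / 255) < 1e-8
```

A mismatch here would not stop the pipeline, but it can silently degrade or eliminate detections.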


Please check your configuration file.
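As an aside, each roi-params-src-* line in config_preprocess.txt packs one or more ROIs as semicolon-separated left;top;width;height quadruples. A small illustrative helper (hypothetical, not part of DeepStream) shows how the example line groups into two ROIs:

```python
def parse_roi_params(value: str):
    """Split a roi-params-src-* string into (left, top, width, height) tuples."""
    nums = [int(v) for v in value.split(";") if v.strip()]
    if len(nums) % 4 != 0:
        raise ValueError("roi-params must contain left;top;width;height quadruples")
    return [tuple(nums[i:i + 4]) for i in range(0, len(nums), 4)]

# The roi-params-src-0 line from the patch above defines two ROIs:
rois = parse_roi_params("300;200;700;800;1300;300;600;700")
print(rois)  # [(300, 200, 700, 800), (1300, 300, 600, 700)]
```

ROIs that fall outside the streammux resolution (1920x1080 here) are a common reason for empty preprocess output, so it is worth verifying the quadruples against the frame size.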

I will try it and respond to you as soon as possible.