Please provide complete information as applicable to your setup.
• Hardware Platform: Jetson
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.2
• TensorRT Version: 8.5.2.2
• Issue Type: bug
**• How to reproduce the issue?**
I am building a YOLOv8n DeepStream pipeline with 80 classes, and nvdspreprocess is not returning the model's detection results. This happens only when I add the nvdspreprocess plugin to the pipeline; if I don't include it, the model runs fine on DeepStream and returns values. The files I have attached include the paths and configuration I ran with. I want to limit the detection area of the model in DeepStream, which is why I need nvdspreprocess (see the sketch below for how it sits in the pipeline). I set input-tensor-from-meta=1 and the related options, but nvinfer has no output; with input-tensor-from-meta=0, nvinfer has output. My running environment: container nvcr.io/nvidia/deepstream:6.3-triton-multiarch.
**script** (the .py file cannot be uploaded, so it is renamed to .txt): pipeline.txt (9.8 KB)
**model** (uploading is not supported):
yolov8n → ONNX → .engine
**nvdspreprocess config file**: config_preprocess.txt (733 Bytes)
**nvinfer config file**: yolov8s.txt (763 Bytes)
streammux:
width: 1920
height: 1080
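For context, here is a minimal sketch of the element order being described, assuming a Python GStreamer pipeline (the element names, files, and properties below are illustrative, not taken from the attached pipeline.txt):

```python
#!/usr/bin/env python3
# Minimal sketch: nvdspreprocess sits between nvstreammux and nvinfer,
# and nvinfer is told to consume the tensor prepared by nvdspreprocess.
# Source bins, sink, and bus handling are omitted.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

streammux = Gst.ElementFactory.make("nvstreammux", "mux")
preprocess = Gst.ElementFactory.make("nvdspreprocess", "preprocess")
pgie = Gst.ElementFactory.make("nvinfer", "primary-infer")

streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("batch-size", 1)

# nvdspreprocess prepares the input tensor from the ROIs in its config file
preprocess.set_property("config-file", "config_preprocess.txt")

# nvinfer must take its input tensor from metadata instead of scaling frames itself
pgie.set_property("config-file-path", "yolov8s.txt")
pgie.set_property("input-tensor-meta", True)

pipeline = Gst.Pipeline.new("sketch")
for elem in (streammux, preprocess, pgie):
    pipeline.add(elem)
streammux.link(preprocess)
preprocess.link(pgie)
```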
I don't know what is going on: the pipeline keeps running, but no model values are returned when I attach a probe. Please help.
The config file (pipeline.txt) runs the pipeline with nvdspreprocess as above, configured with input-tensor-from-meta=1 so that input is taken from nvdspreprocess, but the problem is that I get no values from the model when the nvdspreprocess plugin is added. The pipeline runs normally when I don't add the nvdspreprocess plugin.
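For reference, the probe in question is roughly of this shape (a minimal sketch using the pyds bindings; the function name and where it is attached are assumptions):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie_src_pad_probe(pad, info, user_data):
    """Sketch of a probe on nvinfer's src pad that prints detections.
    When nvinfer produces no output, obj_meta_list is simply empty."""
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            print(f"class={obj_meta.class_id} conf={obj_meta.confidence:.2f}")
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

With input-tensor-from-meta enabled, this loop finding no objects at all matches the symptom described above.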
Sorry for the long delay. I have tried the YOLOv8 model with deepstream-app from the DeepStream-Yolo repository.
Here is my patch; it works fine.
diff --git a/config_infer_primary_yoloV8.txt b/config_infer_primary_yoloV8.txt
index c0c1311..a9a6155 100644
--- a/config_infer_primary_yoloV8.txt
+++ b/config_infer_primary_yoloV8.txt
@@ -2,8 +2,9 @@
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
-onnx-file=yolov8s.onnx
-model-engine-file=model_b1_gpu0_fp32.engine
+#onnx-file=yolov8s.onnx
+onnx-file=/root/DeepStream-Yolo/ultralytics/ultralytics/yolov8s.onnx
+model-engine-file=/root/DeepStream-Yolo/model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
@@ -19,7 +20,7 @@ symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
-custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
+custom-lib-path=/root/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
diff --git a/config_preprocess.txt b/config_preprocess.txt
new file mode 100644
index 0000000..47d2eb2
--- /dev/null
+++ b/config_preprocess.txt
@@ -0,0 +1,79 @@
+################################################################################
+# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
+#
+# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
+# property and proprietary rights in and to this material, related
+# documentation and any modifications thereto. Any use, reproduction,
+# disclosure or distribution of this material and related documentation
+# without an express license agreement from NVIDIA CORPORATION or
+# its affiliates is strictly prohibited.
+################################################################################
+
+# The values in the config file are overridden by values set through GObject
+# properties.
+
+[property]
+enable=1
+ # list of component gie-id for which tensor is prepared
+target-unique-ids=1
+ # 0=NCHW, 1=NHWC, 2=CUSTOM
+network-input-order=0
+ # 0=process on objects 1=process on frames
+process-on-frame=1
+ #uniquely identify the metadata generated by this element
+unique-id=5
+ # gpu-id to be used
+gpu-id=0
+ # if enabled maintain the aspect ratio while scaling
+maintain-aspect-ratio=1
+ # if enabled pad symmetrically with maintain-aspect-ratio enabled
+symmetric-padding=1
+ # processig width/height at which image scaled
+processing-width=640
+processing-height=640
+ # max buffer in scaling buffer pool
+scaling-buf-pool-size=6
+ # max buffer in tensor buffer pool
+tensor-buf-pool-size=6
+ # tensor shape based on network-input-order
+#network-input-shape= 8;3;544;960
+network-input-shape=1;3;640;640
+ # 0=RGB, 1=BGR, 2=GRAY
+network-color-format=0
+ # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
+tensor-data-type=0
+ # tensor name same as input layer name
+# tensor-name=input_1
+tensor-name=input
+ # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
+scaling-pool-memory-type=0
+ # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
+scaling-pool-compute-hw=0
+ # Scaling Interpolation method
+ # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
+ # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
+ # 6=NvBufSurfTransformInter_Default
+scaling-filter=0
+ # custom library .so path having custom functionality
+custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
+ # custom tensor preparation function name having predefined input/outputs
+ # check the default custom library nvdspreprocess_lib for more info
+custom-tensor-preparation-function=CustomTensorPreparation
+
+[user-configs]
+ # Below parameters get used when using default custom library nvdspreprocess_lib
+ # network scaling factor
+pixel-normalization-factor=0.003921568
+ # mean file path in ppm format
+#mean-file=
+ # array of offsets for each channel
+#offsets=
+
+[group-0]
+src-ids=0
+custom-input-transformation-function=CustomAsyncTransformation
+process-on-roi=1
+roi-params-src-0=300;200;700;800;1300;300;600;700
+roi-params-src-1=860;300;900;500;50;300;500;700
+
diff --git a/deepstream_app_config.txt b/deepstream_app_config.txt
index 8c6822f..1918697 100644
--- a/deepstream_app_config.txt
+++ b/deepstream_app_config.txt
@@ -20,12 +20,31 @@ gpu-id=0
cudadec-memtype=0
[sink0]
-enable=1
+enable=0
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0
+[sink1]
+enable=1
+type=3
+#1=mp4 2=mkv
+container=1
+#1=h264 2=h265
+codec=3
+#encoder type 0=Hardware 1=Software
+enc-type=0
+sync=0
+bitrate=20000
+#H264 Profile - 0=Baseline 2=Main 4=High
+#H265 Profile - 0=Main 1=Main10
+# set profile only for hw encoder, sw encoder selects profile based on sw-preset
+profile=0
+output-file=out.mp4
+source-id=0
+gpu-id=0
+
[osd]
enable=1
gpu-id=0
@@ -51,12 +70,17 @@ height=1080
enable-padding=0
nvbuf-memory-type=0
+[pre-process]
+enable=1
+config-file=config_preprocess.txt
+
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
-config-file=config_infer_primary.txt
+input-tensor-meta=1
+config-file=config_infer_primary_yoloV8.txt
[tests]
file-loop=0
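A note on the [group-0] settings in the patch above: each roi-params-src-<id> entry is a flat list of left;top;width;height values, one quadruple per ROI, so roi-params-src-0=300;200;700;800;1300;300;600;700 defines two ROIs on source 0. Also make sure tensor-name matches the model's actual input layer name and network-input-shape matches the engine's input dimensions (here 1;3;640;640 for batch-size=1). With the patch applied (e.g. git apply out.patch inside the DeepStream-Yolo checkout), the sample runs as usual with deepstream-app -c deepstream_app_config.txt.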
I tried the YOLOv8 model with deepstream-app on a Jetson Orin NX 8GB, JetPack 5.1.2, DeepStream 6.3:
I tried running deepstream-app on the video source with only primary-gie, and the screen showed bounding boxes as in figure 1.
When I added the pre-process section to deepstream-app and set input-tensor-meta=1 on primary-gie, the screen showed only the area to be preprocessed, but no bounding boxes appeared and the probe returned None, as in figure 2. It seems the preprocessing step ran but the pipeline did not run through the model. Please help.
Running files:
deepstream-app: deepstream_app.txt (922 Bytes)
config preprocess: config_preprocess.txt (854 Bytes)
yolov8 model: config_yolov8s.txt (688 Bytes)
figure 1 (running only the model):
I have run the model and it displays as in figure 1, built on the repo you sent; I only changed the name and moved the lib for DeepStream 6.3.
The question is why it cannot run with the original source example from the Docker image and the way I ran it above.
Can you try it in your environment with my above configurations?
Since I don't have a Jetson with JetPack 5.1 installed, I used a dGPU to test the yolov8n.onnx model with your config file and nvcr.io/nvidia/deepstream:6.3-triton-multiarch, and it works fine.
Try to modify the following configuration items in config_preprocess.txt.
A little strange.
1. Can you try upgrading to DS-7.0? I am currently unable to debug this issue on DS-6.3 due to the lack of corresponding versions.
2. Does the deepstream-preprocess-test sample work normally?
I built it, and I don't know what the problem was, but I rebuilt the plugin, copied the .so file to /opt/nvidia/deepstream/deepstream/libs/, and it worked.
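For anyone hitting the same symptom: in the DeepStream-Yolo repository the custom parser library is typically rebuilt with CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo (CUDA 11.4 matching JetPack 5.1.2; adjust for your platform), and the resulting libnvdsinfer_custom_impl_Yolo.so must be the one found via custom-lib-path. A stale or mismatched .so is a plausible explanation for the silent missing-output behaviour described above: the engine runs, but the bounding-box parser never produces objects.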