Enabling NvdsPreprocess leads to no detections from PGIE in ROI

Please provide complete information as applicable to your setup.

• Hardware Platform GPU
• DeepStream Version 6.2
• TensorRT Version 8.5.2.2
• NVIDIA GPU Driver Version 525.125.06
• Issue Type (questions, new requirements, bugs): Bug. After enabling the NvdsPreprocess plugin in the pipeline, PGIE is not producing any detections, either inside or outside the ROI.
• How to reproduce the issue ?

After configuring config_preprocess.txt according to my model parameters and enabling input-tensor-meta for PGIE, I do get the ROI drawn on the video; however, no predictions are received from the PGIE model.
I use a yolov7 640x640 model; based on the DeepStream logs, the input layer of the model is named “input”, so I set the tensor-name value to “input”.

I copied the rest of preprocessing/model related configs from config_infer_primary.txt to config_preprocess.txt.
Here is my config.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=10

[tiled-display]
enable=0
rows=1
columns=1
width=1400
height=500
gpu-id=0
nvbuf-memory-type=0


[pre-process]
enable=1
config-file=config_preprocess.txt


[source0]
enable=1
type=3
uri=file://../sample_video/test.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0
rtsp-reconnect-interval-sec=30


[sink0]
enable=1
type=3
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
container=1
codec=1
enc-type=1
profile=0
output-file=output.mp4
bitrate=40000000

[osd]
enable=1
gpu-id=0
border-width=2
text-size=20
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
#width=2888
#height=962
width=3840
height=2160
enable-padding=0
nvbuf-memory-type=0
attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
input-tensor-meta=1

[tests]
file-loop=0

Here is config_infer_primary.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
model-engine-file=../yolo/checkpoint/model_b1_gpu0_fp16.engine
labelfile-path=../yolo/labels.txt
onnx-file=../yolo/checkpoint/best_reparametrized.onnx
scaling-filter=1
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=../yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
output-tensor-meta=1


[class-attrs-all]
nms-iou-threshold=0.4
pre-cluster-threshold=0.2
topk=300
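One thing I double-checked in the values above: net-scale-factor is just 1/255 (up to float rounding), and as far as I understand it should match the pixel-normalization-factor on the preprocess side, since with input-tensor-meta=1 the preprocess plugin does the normalization instead of nvinfer. A quick check, nothing DeepStream-specific:

```python
# Both sides normalize pixels to [0, 1]; the two constants should both be 1/255.
net_scale_factor = 0.0039215697906911373   # from config_infer_primary.txt
pixel_normalization_factor = 0.003921568   # from config_preprocess.txt

assert abs(net_scale_factor - 1 / 255) < 1e-8
assert abs(pixel_normalization_factor - 1 / 255) < 1e-8
print("both constants are 1/255 up to rounding")
```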

And here is config_preprocess.txt:

[property]
enable=1
target-unique-ids=1
network-input-order=0
process-on-frame=1
unique-id=5
gpu-id=0
maintain-aspect-ratio=1
symmetric-padding=1
processing-width=640
processing-height=640
scaling-buf-pool-size=6
tensor-buf-pool-size=6
network-input-shape=1;3;640;640
network-color-format=0
tensor-data-type=0
tensor-name=input
scaling-pool-memory-type=0
scaling-pool-compute-hw=0
scaling-filter=1
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.2/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
pixel-normalization-factor=0.003921568

[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
draw-roi=1
roi-params-src-0=914;770;2888;962
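Since nvdspreprocess replaces nvinfer's preprocessing here, I also wrote a small sanity check that the values I copied between the configs actually agree. This is plain Python; the dicts just mirror the key/value pairs above, and the meaning of the ;-separated fields is my reading of the plugin docs:

```python
# Key/value pairs copied from the configs in this post.
preprocess = {
    "network-input-shape": "1;3;640;640",    # batch;channels;height;width (NCHW, network-input-order=0)
    "processing-width": 640,
    "processing-height": 640,
    "target-unique-ids": 1,
    "roi-params-src-0": "914;770;2888;962",  # left;top;width;height (my reading of the docs)
}
streammux = {"width": 3840, "height": 2160}
pgie = {"gie-unique-id": 1}

# The tensor shape must match the processing resolution.
n, c, h, w = (int(v) for v in preprocess["network-input-shape"].split(";"))
assert (w, h) == (preprocess["processing-width"], preprocess["processing-height"])

# target-unique-ids must name the PGIE that consumes the tensor meta.
assert preprocess["target-unique-ids"] == pgie["gie-unique-id"]

# Each ROI must lie inside the streammux output resolution.
left, top, rw, rh = (int(v) for v in preprocess["roi-params-src-0"].split(";"))
assert left + rw <= streammux["width"] and top + rh <= streammux["height"]

print("config cross-check passed")
```

Everything passes, so at least these values are consistent with each other.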

Any ideas on what I might have missed? Thanks.

Please refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-preprocess-test to check whether your configuration is proper.

I get the following output when launching the deepstream-preprocess-test app:
This doesn’t help much, is there an app where I could see the video output with ROIs and detections in them? How do I know if there are any detections?

To answer your question, I compared my configuration with deepstream-preprocess-test and couldn’t find any differences. Any help?

Thanks.

The log seems wrong. How did you run the sample app? Have you read /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-preprocess-test/README?

Yes, I’ve read the README and followed the steps. Here’s what I did:

Do xhost +, then:

  1. Enter docker container with docker run --gpus all -it --network=host --name testing --runtime=nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream:6.2-devel bash
  2. cd sources/apps/sample_apps/deepstream-preprocess-test
  3. export CUDA_VER=11.8
  4. make && make install
  5. cp ../../../../samples/streams/sample_1080p_h264.mp4 .
  6. ./deepstream-preprocess-test config_preprocess.txt config_infer.txt sample_1080p_h264.mp4

I get this:

If before running the container I don’t do xhost +, I get the output above (in my previous post).
By the way, I was able to successfully launch the default deepstream-app with tiled display of 30 sources.
Any more tips?
Thanks

No. The command is wrong. URI is needed.

./deepstream-preprocess-test config_preprocess.txt config_infer.txt file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4

Please read the README file carefully.

I was able to launch the deepstream-preprocess-test app, thanks for the tips.
However, I cannot get it to work in my pipeline, which is as follows: source -> streammux -> nvdspreprocess -> nvinfer. I am using the deepstream-app with my own custom configs, provided in the first post. My application runs fine and there are no errors in the console:

But as I mentioned, the saved output video has no detections, only the ROI drawn onto it. When I disable the nvdspreprocess plugin, the inference works fine.
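In case it matters, here is my understanding of how the ROI should be scaled with maintain-aspect-ratio=1 and symmetric-padding=1 set. This is just a sketch of the letterbox math, not taken from the plugin source, and the function name is mine:

```python
def letterbox(src_w, src_h, dst_w, dst_h):
    """Scale keeping aspect ratio, then pad symmetrically to the destination size."""
    scale = min(dst_w / src_w, dst_h / src_h)
    out_w, out_h = round(src_w * scale), round(src_h * scale)
    pad_x, pad_y = (dst_w - out_w) // 2, (dst_h - out_h) // 2
    return scale, out_w, out_h, pad_x, pad_y

# My 2888x962 ROI (from roi-params-src-0) into the 640x640 network input:
# it maps to a 640x213 image with 213 px of padding top and bottom.
scale, out_w, out_h, pad_x, pad_y = letterbox(2888, 962, 640, 640)
print(scale, out_w, out_h, pad_x, pad_y)
```

If the bbox parser assumed unpadded coordinates, detections could land in the wrong place or get filtered out, so this seemed worth checking.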
Any ideas what could be causing this?
Thanks for all the help thus far!

I am now trying to use the deepstream-preprocess-test app but with my yolov7 model and testing video. Inside the docker container, I launch:
deepstream-preprocess-test config_preprocess.txt config_infer_primary.txt file:///opt/nvidia/deepstream/deepstream-6.2/sources/apps/main/sample_video/test.mp4. I have enabled xhost + before running the container, but I get Error: Decodebin did not pick nvidia decoder plugin. This is the full output:

Any ideas?

Please check whether nvv4l2decoder is in your docker container. You can also check whether the driver is correct by “ls -l /usr/lib/x86_64-linux-gnu/libnvcuvid.so*” in the docker container.

Thanks a lot, issue solved!
For anyone who may have this problem: if you’re using a docker environment, don’t set
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility,compat32,display
These environment variables somehow made the decoder plugin inaccessible.
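My understanding of why this happens: the NVIDIA container runtime only mounts libnvcuvid.so (the NVDEC interface) into the container when the video capability is requested, and the list above omits it. So either drop those ENV lines entirely (the DeepStream base image sets working defaults) or include video explicitly, e.g.:

```dockerfile
# Request the video capability so NVDEC (libnvcuvid) is mounted into the container
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,display
```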