Please provide complete information as applicable to your setup.
• Hardware Platform: GPU
• DeepStream Version: 6.2
• TensorRT Version: 8.5.2.2
• NVIDIA GPU Driver Version: 525.125.06
• Issue Type: bug/question. After enabling the nvdspreprocess plugin in the pipeline, the PGIE does not produce any detections, either inside or outside the ROI.
• How to reproduce the issue?
After configuring config_preprocess.txt according to my model parameters and enabling input-tensor-meta for the PGIE, the ROI is drawn on the video, but no predictions come back from the PGIE model. I use a YOLOv7 640x640 model; according to the DeepStream logs the model's input layer is named "input", so I set tensor-name=input in the preprocess config. I copied the rest of the preprocessing/model-related settings from config_infer_primary.txt into config_preprocess.txt.
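For reference, the input name and shape can also be double-checked directly from the ONNX file (a small standalone snippet, not part of my pipeline; the path is the one from my config):

# Print the ONNX graph inputs to confirm the name/shape that
# tensor-name and network-input-shape have to match.
import onnx

model = onnx.load("../yolo/checkpoint/best_reparametrized.onnx")
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # expecting something like: input [1, 3, 640, 640]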
Here is my config.txt:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=10
[tiled-display]
enable=0
rows=1
columns=1
width=1400
height=500
gpu-id=0
nvbuf-memory-type=0
[pre-process]
enable=1
config-file=config_preprocess.txt
[source0]
enable=1
type=3
uri=file://../sample_video/test.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0
rtsp-reconnect-interval-sec=30
[sink0]
enable=1
type=3
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
container=1
codec=1
enc-type=1
profile=0
output-file=output.mp4
bitrate=40000000
[osd]
enable=1
gpu-id=0
border-width=2
text-size=20
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
#width=2888
#height=962
width=3840
height=2160
enable-padding=0
nvbuf-memory-type=0
attach-sys-ts-as-ntp=1
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
input-tensor-meta=1
[tests]
file-loop=0
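My understanding of how these two groups interact: [pre-process] makes deepstream-app insert an nvdspreprocess element after the streammux, and input-tensor-meta=1 under [primary-gie] tells nvinfer to consume the tensors attached by that element instead of doing its own preprocessing. In a hand-built pipeline I believe the relevant part would look roughly like this (Python/GStreamer sketch only, element names are mine, this is not my actual app):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Sketch of the [streammux] -> [pre-process] -> [primary-gie] part of config.txt.
streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("batch-size", 1)
streammux.set_property("width", 3840)
streammux.set_property("height", 2160)

preprocess = Gst.ElementFactory.make("nvdspreprocess", "preprocess")
preprocess.set_property("config-file", "config_preprocess.txt")

pgie = Gst.ElementFactory.make("nvinfer", "primary-gie")
pgie.set_property("config-file-path", "config_infer_primary.txt")
pgie.set_property("unique-id", 1)
pgie.set_property("input-tensor-meta", True)  # use tensors prepared by nvdspreprocess

pipeline = Gst.Pipeline.new("preprocess-test")
for element in (streammux, preprocess, pgie):
    pipeline.add(element)
streammux.link(preprocess)
preprocess.link(pgie)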
Here is config_infer_primary.txt:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
model-engine-file=../yolo/checkpoint/model_b1_gpu0_fp16.engine
labelfile-path=../yolo/labels.txt
onnx-file=../yolo/checkpoint/best_reparametrized.onnx
scaling-filter=1
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=../yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
output-tensor-meta=1
[class-attrs-all]
nms-iou-threshold=0.4
pre-cluster-threshold=0.2
topk=300
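Since output-tensor-meta=1 is set, my plan for debugging is to also look at the raw output layers that nvinfer attaches, to tell apart "the engine produces nothing" from "the custom NvDsInferParseYolo parser drops everything". Something along these lines in a small separate pyds test script (sketch only; the probe name is mine, not part of deepstream-app):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie_src_probe(pad, info, u_data):
    # Count detections and list the raw output layers attached by nvinfer
    # (tensor meta is present because output-tensor-meta=1).
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print(f"frame {frame_meta.frame_num}: {frame_meta.num_obj_meta} objects")
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                names = [pyds.get_nvds_LayerInfo(tensor_meta, i).layerName
                         for i in range(tensor_meta.num_output_layers)]
                print("  output layers:", names)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# attach with: pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pgie_src_probe, None)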
And here is config_preprocess.txt:
[property]
enable=1
target-unique-ids=1
network-input-order=0
process-on-frame=1
unique-id=5
gpu-id=0
maintain-aspect-ratio=1
symmetric-padding=1
processing-width=640
processing-height=640
scaling-buf-pool-size=6
tensor-buf-pool-size=6
network-input-shape=1;3;640;640
network-color-format=0
tensor-data-type=0
tensor-name=input
scaling-pool-memory-type=0
scaling-pool-compute-hw=0
scaling-filter=1
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.2/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation
[user-configs]
pixel-normalization-factor=0.003921568
[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
draw-roi=1
roi-params-src-0=914;770;2888;962
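For completeness: if I understand correctly that roi-params-src-0 is left;top;width;height relative to the [streammux] output resolution, the ROI should fit inside the 3840x2160 frame. Quick arithmetic check:

# ROI (left;top;width;height) from config_preprocess.txt vs. the [streammux] resolution.
mux_w, mux_h = 3840, 2160
left, top, w, h = 914, 770, 2888, 962
print("ROI right/bottom edge:", left + w, top + h)  # 3802, 1732 -> inside 3840x2160
assert left + w <= mux_w and top + h <= mux_h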
Any ideas on what I might have missed? Thanks.