How nvdspreprocess network-input-shape works

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson NX
• DeepStream Version
DeepStream 6.0.2
• JetPack Version (valid for Jetson only)
JetPack 4.6
• TensorRT Version
TensorRT 8.0
• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type( questions, new requirements, bugs)
questions

Hi, when I use the nvdspreprocess plugin to run YOLOv8-seg, the detection results are flickering (1 ROI, network-input-shape=1;3;704;704). When I set network-input-shape=2;3;704;704, the problem is relieved.

The problem looks like this (screenshot attachment), and then changes to this (second screenshot attachment).

Thanks for sharing!

  1. About “the problem is relieved”: is the problem still not fixed?
  2. Which sample are you testing or referring to? Could you help to reproduce this issue? Could you provide the configuration files and a test video? Thanks!

The problem is not fixed, but the frequency is reduced. I cannot reproduce the problem on demand. The config file is below (and I am sure the video is not the reason):
[property]
enable=1
target-unique-ids=1
# 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0
processing-width=704
processing-height=704
scaling-buf-pool-size=6
tensor-buf-pool-size=6
# tensor shape based on network-input-order
network-input-shape=2;3;704;704
# 0=RGB, 1=BGR, 2=GRAY
network-color-format=0
# 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=5
tensor-name=images
# 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
scaling-pool-memory-type=0
# 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
scaling-pool-compute-hw=0
# Scaling Interpolation method
# 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
# 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
# 6=NvBufSurfTransformInter_Default
scaling-filter=1
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
pixel-normalization-factor=0.003921568
#mean-file=
#offsets=

[group-0]
src-ids=0
process-on-roi=1
roi-params-src-0=750;60;1000;1000

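As an editorial illustration of how the batch dimension of network-input-shape relates to the ROI count (this snippet is a sketch, not from the original thread, and the second ROI's coordinates are invented): in the config above, [group-0] defines a single ROI, so a first dimension of 1 matches exactly, while 2 leaves one batch slot unused per frame. If a second ROI were added, the shape would need to cover both:

```ini
[property]
# First value is the max batch size; it must be >= the total number
# of ROIs summed over all groups (here 2 ROIs, so batch size 2).
network-input-shape=2;3;704;704

[group-0]
src-ids=0
process-on-roi=1
# Two ROIs on source 0 (left;top;width;height per ROI),
# i.e. two tensor units per frame. Second ROI is hypothetical.
roi-params-src-0=750;60;1000;1000;100;60;600;600
```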
  1. Which sample are you testing or referring to? Did you add custom code? Could you share the whole media pipeline?
  2. To isolate this issue, can you test the pipeline without nvdspreprocess? If the issue can’t be reproduced then, it should be an nvdspreprocess issue; if it still can be reproduced, it should be an nvinfer issue. Please refer to the yolov8seg sample.

When I use the nvinfer plugin for entire-frame inference, the problem vanishes, so I think the problem is only in the nvdspreprocess plugin. I am sorry that I cannot reproduce the problem in other scenes, but it may be related to the device load, because when the flickering appears the CPU load is over 70%. I wonder why network-input-shape can affect the device load: when it is set to 2;3;704;704, the NX CPU load is lower than with 1;3;704;704.

The nvdspreprocess plugin is open source. The first parameter of network-input-shape is the batch size, and it affects inference performance. Please refer to the “NOTE:” part of /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/README for how to set network-input-shape.
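The rule the moderator points to can be checked mechanically. Below is a small helper sketch (not part of DeepStream; the function name `required_batch` is invented for illustration) that parses a gst-nvdspreprocess config and compares the batch dimension of network-input-shape against the total number of ROIs defined across all groups:

```python
# Hedged sketch (not part of DeepStream): check that the batch dimension
# of network-input-shape covers every ROI defined in a gst-nvdspreprocess
# config, as the plugin README's NOTE requires.
import configparser


def required_batch(cfg_text: str) -> tuple[int, int]:
    """Return (configured_batch, total_tensor_units) for a preprocess config."""
    cp = configparser.ConfigParser()
    cp.read_string(cfg_text)
    # First value of network-input-shape is the max batch size.
    batch = int(cp["property"]["network-input-shape"].split(";")[0])
    total = 0
    for section in cp.sections():
        if not section.startswith("group-"):
            continue
        group = cp[section]
        if group.get("process-on-roi", "0") == "1":
            for key, value in group.items():
                if key.startswith("roi-params-src-"):
                    # Each ROI is left;top;width;height -> 4 numbers.
                    total += len(value.split(";")) // 4
        else:
            # Full-frame processing: one tensor unit per source id.
            total += len(group["src-ids"].split(";"))
    return batch, total


cfg = """
[property]
network-input-shape=2;3;704;704
[group-0]
src-ids=0
process-on-roi=1
roi-params-src-0=750;60;1000;1000
"""
batch, units = required_batch(cfg)
print(batch >= units)  # batch 2 covers the single ROI in group-0
```

With the thread's config this reports batch 2 against 1 ROI, i.e. the batch dimension is large enough either way; the flickering difference between 1 and 2 is therefore not a simple under-sized batch.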

There has been no update from you for a while, so we assume this is not an issue anymore and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.