The inference works outside of the ROI (set by nvdspreprocess)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
6.2
• JetPack Version (valid for Jetson only)
5.1.0

I have a DeepStream pipeline where I want inference to happen only on an ROI window. I set the config, but it seems inference is happening outside the ROI too. (When I run the example it works correctly.)

My settings for the preprocess element:

[property]
enable=1
    # list of component gie-id for which tensor is prepared
target-unique-ids=1
    # 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0
    # 0=process on objects 1=process on frames
process-on-frame=1
    #uniquely identify the metadata generated by this element
unique-id=5
    # gpu-id to be used
gpu-id=0
    # if enabled maintain the aspect ratio while scaling
maintain-aspect-ratio=1
    # if enabled pad symmetrically with maintain-aspect-ratio enabled
symmetric-padding=1
    # processing width/height at which image is scaled
processing-width=300
processing-height=300
    # max buffer in scaling buffer pool
scaling-buf-pool-size=6
    # max buffer in tensor buffer pool
tensor-buf-pool-size=6
    # tensor shape based on network-input-order
network-input-shape= 1;3;300;300
    # 0=RGB, 1=BGR, 2=GRAY
network-color-format=1
    # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=5
    # tensor name same as input layer name
tensor-name=Input
    # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
scaling-pool-memory-type=0
    # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC # default was 0, ganindu changed that to 1
scaling-pool-compute-hw=1
    # Scaling Interpolation method
    # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
    # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
    # 6=NvBufSurfTransformInter_Default
scaling-filter=0
    # custom library .so path having custom functionality
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
    # custom tensor preparation function name having predefined input/outputs
    # check the default custom library nvdspreprocess_lib for more info
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
   # Below parameters get used when using default custom library nvdspreprocess_lib
   # network scaling factor
pixel-normalization-factor=1.0
   # mean file path in ppm format
#mean-file=
   # array of offsets for each channel
#offsets=

[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
roi-params-src-0=300;100;1200;900
draw-roi=1
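For reference, `roi-params-src-0` is `left;top;width;height` in frame coordinates, so the ROI above covers x 300–1500, y 100–1000. One quick way to sanity-check whether detections are leaking outside the ROI is to compare the reported bounding boxes against it. A standalone sketch (the parsing helper and box format here are my own, not a DeepStream API):

```python
def parse_roi(roi_params: str):
    """Parse a roi-params-src-N string (left;top;width;height)
    into an (x1, y1, x2, y2) box."""
    left, top, width, height = (int(v) for v in roi_params.split(";"))
    return left, top, left + width, top + height

def inside_roi(box, roi):
    """True if a detection box (x1, y1, x2, y2) lies fully inside the ROI."""
    return (box[0] >= roi[0] and box[1] >= roi[1]
            and box[2] <= roi[2] and box[3] <= roi[3])

roi = parse_roi("300;100;1200;900")           # the ROI from [group-0]
print(inside_roi((400, 200, 900, 600), roi))  # fully inside -> True
print(inside_roi((50, 50, 200, 200), roi))    # outside the ROI -> False
```

If inference were honouring the ROI, every primary detection should pass this check; boxes failing it are the symptom described above.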

What is happening to me is very similar to this forum post: Gst-nvdspreprocess set roi as input for pgie

It is a shame that 013848678 went AFK before the problem was solved.

An overview of my pipeline is:

other upstream stuff --> streammux --> nvdspreprocess --> nvinfer --> other downstream stuff
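For concreteness, that overview would correspond to something like the following gst-launch sketch (source, sink, and config-file paths are placeholders, not my actual command):

```shell
# Sketch only: "..." stands for the upstream/downstream elements.
gst-launch-1.0 \
  ... ! nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvdspreprocess config-file=config_preprocess.txt ! \
  nvinfer config-file-path=config_infer.txt ! \
  ...
```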

I wonder if some setting in nvinfer is overriding the ROI setting, or if I'm missing some very obvious config toggle?

Cheers,
Ganindu.

element in question:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvdspreprocess.html

related topic (unresolved):

P.S

network-input-shape= 1;3;300;300 # (from Gst-nvdspreprocess config)
infer-dims=3;300;300 # (from nvinfer(pgie: no other GIEs used in this case) config)

So I believe nvdspreprocess's network-input-shape is consistent with the model's input shape.
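That relationship can be checked mechanically: with network-input-order=0 (NCHW), `network-input-shape` is `batch;channels;height;width`, and nvinfer's `infer-dims` is the same shape with the batch dimension dropped. A small sketch (helper name is my own):

```python
def shapes_match(network_input_shape: str, infer_dims: str) -> bool:
    """Check that nvdspreprocess's network-input-shape (N;C;H;W) agrees
    with nvinfer's infer-dims (C;H;W) after dropping the batch dim."""
    nchw = [int(v) for v in network_input_shape.split(";")]
    chw = [int(v) for v in infer_dims.split(";")]
    return nchw[1:] == chw

print(shapes_match("1;3;300;300", "3;300;300"))  # -> True
```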

Please set nvinfer's input-tensor-meta to 1 in the configuration file. If it still doesn't work, please share the nvinfer configuration file.

Thanks a lot for getting back to me!

Your answer was almost correct! It pointed me in the right direction!

It is actually

input-tensor-from-meta=1

in the nvinfer config file!
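For anyone landing here later, the relevant fragment of the nvinfer (pgie) config would look something like this (a sketch with other keys omitted; only input-tensor-from-meta is the confirmed fix from this thread):

```ini
[property]
gpu-id=0
# Take the input tensor prepared by nvdspreprocess instead of letting
# nvinfer scale/crop the full frame itself:
input-tensor-from-meta=1
# Must match an id listed in nvdspreprocess's target-unique-ids
# (target-unique-ids=1 in the config above):
gie-unique-id=1
```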

Thanks a lot again!!

