Infer on a specified area of the frame

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson)
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5
• TensorRT Version: 7.x
Hi:
Using the deepstream-test3 Python example, I successfully connected 8 channels of 1080p network IP cameras. I want the program to run inference only on a specific region of the frame, for example a 600x600 area of the picture. How can I do that?

The gst-nvinfer plugin only supports limiting the ROI vertically, via the “roi-top-offset” and “roi-bottom-offset” properties. Gst-nvinfer — DeepStream 6.1.1 Release documentation
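For reference, these keys normally live in the nvinfer configuration file under a class-attrs group rather than being set on the element directly; a minimal sketch (the offset values here are illustrative, not taken from this thread):

```
# nvinfer config file excerpt (assumed layout).
# Offsets are in pixels from the top / bottom of the frame;
# detections outside the remaining band are filtered out.
[class-attrs-all]
roi-top-offset=200
roi-bottom-offset=200
```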

You can also use the nvvideoconvert plugin to crop the input video to a new 600x600 video with the “src-crop” property. Gst-nvvideoconvert — DeepStream 6.1.1 Release documentation
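A minimal sketch of the “src-crop” value format, which is “left:top:width:height” in pixels of the input frame. The helper name and the bounds check are my own additions; the actual `set_property` call is commented out since it needs a live GStreamer element:

```python
def src_crop(left, top, width, height, frame_w, frame_h):
    """Build the value for nvvideoconvert's "src-crop" property.

    The format is "left:top:width:height", all in pixels of the
    input frame. Raises ValueError if the rectangle does not fit.
    """
    if left < 0 or top < 0 or left + width > frame_w or top + height > frame_h:
        raise ValueError("crop rectangle exceeds frame bounds")
    return f"{left}:{top}:{width}:{height}"

# Example: a 600x600 region starting at (100, 50) in a 1920x1080 frame.
crop = src_crop(100, 50, 600, 600, 1920, 1080)
# nvvidconv.set_property("src-crop", crop)  # on a real pipeline element
print(crop)  # 100:50:600:600
```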


Hi:
I used the nvvideoconvert plugin to crop the input video to a new 640x640 video with the “src-crop” property, but I ran into a problem.
This is my pipeline; nvvidconv1 is configured with set_property("src-crop", "550:200:640:640"):

# crop the muxed stream with nvvidconv1 (src-crop)
streammux.link(queue1)
queue1.link(nvvidconv1)
nvvidconv1.link(queue0)

# primary inference
queue0.link(pgie)
pgie.link(queue2)

# capsfilter, then secondary classifier
queue2.link(filter1)
filter1.link(queue6)

queue6.link(face_classifier)
face_classifier.link(queue7)

# tile the streams, convert, draw OSD, and render
queue7.link(tiler)
tiler.link(queue3)

queue3.link(nvvidconv)
nvvidconv.link(queue4)
queue4.link(nvosd)

nvosd.link(queue5)
queue5.link(sink)

When I dump the picture at queue7, I find that the 640x640 frame has been enlarged to 1920x1080. When the input video stream is 1280x720, the 640x640 frame is enlarged to 1280x720 instead. I don’t want this scaling. What should I do?