I am using
nvdsanalytics to detect which objects are inside a given region of interest. Everything seems to work fine, but I can’t see the ROI drawn on my image.
I am using DS6.0 with its devel container, Python bindings, and a Tesla T4.
The following is my config-file:
```
[property]
enable=1
config-width=1920
config-height=1080
#osd-mode 0: Dont display any lines, rois and text
# 1: Display only lines, rois and static text i.e. labels
# 2: Display all info from 1 plus information about counts
osd-mode=2
display-font-size=12

[roi-filtering-stream-0]
enable=1
roi-RF=20;20;500;500;500;20;500
inverse-roi=0
class-id=-1
```
I didn’t set any property on the nvdsanalytics element besides `config-file`. Also, my pipeline runs only one stream (see the attached PDF).
Using a probe, I can see from the metadata that some objects are detected as inside the ROI, but when I get the NumPy frame using `pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)`, I don’t see any ROI drawn on the image. Shouldn’t `osd-mode=2` be sufficient to show the ROI?
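For context, this is the kind of inside-ROI check I’m expecting the plugin to perform. This is only a conceptual illustration (a standard ray-casting point-in-polygon test), not nvdsanalytics’s actual implementation, and the square ROI below is hypothetical:

```python
# Illustration only: ray-casting point-in-polygon test, conceptually similar
# to an inside-ROI check (NOT the actual nvdsanalytics implementation).
def point_in_roi(x, y, roi):
    """roi is a list of (x, y) vertices; returns True if (x, y) is inside."""
    inside = False
    n = len(roi)
    for i in range(n):
        x1, y1 = roi[i]
        x2, y2 = roi[(i + 1) % n]
        # Count crossings of a horizontal ray going right from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical square ROI for illustration only.
roi = [(20, 20), (500, 20), (500, 500), (20, 500)]
print(point_in_roi(100, 100, roi))  # True
print(point_in_roi(600, 100, roi))  # False
```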
Also, I couldn’t find information about how to specify the ROI coordinates:
- Which corner is considered the origin (i.e., point (0,0))?
- Are the coordinates specified in pixels? If so, do they refer to the resolution specified in the nvdsanalytics config file (`config-width`/`config-height`), the nvstreammux resolution, or the original video streams?
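To make the question concrete: if the ROI coordinates are pixels in the `config-width` x `config-height` space (that is exactly my assumption/question), then mapping them to a different resolution would look something like the sketch below. The ROI points and resolutions here are hypothetical:

```python
# Assumption (this is precisely what I'm asking): ROI coordinates are pixels
# in the config-width x config-height space, origin at the top-left corner,
# and would need scaling to another resolution like this.
def scale_roi(points, config_size, target_size):
    """Scale (x, y) ROI points from config resolution to target resolution."""
    sx = target_size[0] / config_size[0]
    sy = target_size[1] / config_size[1]
    return [(round(x * sx), round(y * sy)) for x, y in points]

# Hypothetical points, scaled from 1920x1080 (config) to 1280x720 (streammux).
print(scale_roi([(20, 20), (500, 500)], (1920, 1080), (1280, 720)))
# [(13, 13), (333, 333)]
```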
pipeline.pdf (40.8 KB)