I am using a Jetson Orin Nano with DeepStream 7.0.
When I run deepstream-app -c deepstream_app_config.txt to invoke the YOLOv8-seg model, the result is as follows:
./DeepStream-Yolo-Seg/config_infer_primary_yoloV8_seg.txt:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8s-seg.onnx
model-engine-file=yolov8s-seg.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=3
cluster-mode=4
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-instance-mask-func-name=NvDsInferParseYoloSeg
custom-lib-path=nvdsinfer_custom_impl_Yolo_seg/libnvdsinfer_custom_impl_Yolo_seg.so
output-instance-mask=1
segmentation-threshold=0.5
[class-attrs-all]
pre-cluster-threshold=0.25
topk=100
The mask is translucent and covers the target area. I then tried to replicate this effect in Python with the following pipeline: nvarguscamerasrc ! nvvidconv ! capsfilter ! tee ! queue ! nvstreammux ! nvinfer ! nvdsosd ! nvjpegenc ! appsink (capturing from the CSI camera, running inference in real time, and outputting an MJPEG stream). I added a probe on nvdsosd that reads the mask from the metadata and draws its area onto the frame; the function is below. But I don't think this is a good method: I could not find a function that directly fills the mask area with a translucent color, so I simulate the fill by drawing many small circles. It is very slow and stuttery, and does not work properly.
import numpy as np
import pyds

# nvdsosd limits how many elements one NvDsDisplayMeta can hold
MAX_ELEMENTS_IN_DISPLAY_META = 16

def parse_seg_mask_from_meta(frame_meta, obj_meta):
    # Get the flattened float confidence mask
    data = obj_meta.mask_params.get_mask_array()
    # Mask height and width
    mask_height, mask_width = obj_meta.mask_params.height, obj_meta.mask_params.width
    # Reshape the 1-D data into a 2-D float confidence mask
    confidence_mask = data.reshape((mask_height, mask_width))
    # Object bounding-box parameters
    bbox_left = obj_meta.rect_params.left
    bbox_top = obj_meta.rect_params.top
    bbox_width = obj_meta.rect_params.width
    bbox_height = obj_meta.rect_params.height
    # Scale factors that map mask coordinates to bounding-box coordinates
    scale_x = bbox_width / mask_width
    scale_y = bbox_height / mask_height
    # Threshold the confidence mask into a binary mask
    threshold = 0.5
    binary_mask = (confidence_mask > threshold).astype(np.uint8)
    # Acquire display metadata to draw on the frame
    batch_meta = frame_meta.base_meta.batch_meta
    display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
    # Collect every frame-space point that belongs to the object area
    points_to_draw = []
    for y in range(mask_height):
        for x in range(mask_width):
            if binary_mask[y, x] == 1:  # value 1 means this pixel is inside the object
                xc_original = int(bbox_left + x * scale_x)
                yc_original = int(bbox_top + y * scale_y)
                points_to_draw.append((xc_original, yc_original))
    # Fill the display meta in batches instead of attaching it at every point
    for xc_original, yc_original in points_to_draw:
        if display_meta.num_circles >= MAX_ELEMENTS_IN_DISPLAY_META:
            pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
            display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        # Draw a small dot to simulate filling the mask area
        circle_params = display_meta.circle_params[display_meta.num_circles]
        circle_params.xc = xc_original
        circle_params.yc = yc_original
        circle_params.radius = 2
        circle_params.circle_color.red = 0.0
        circle_params.circle_color.green = 1.0
        circle_params.circle_color.blue = 1.0  # green + blue, so the dots render cyan
        circle_params.circle_color.alpha = 0.05
        display_meta.num_circles += 1
    # Attach the remaining display meta to the frame
    pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
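For reference, a much faster alternative I have been sketching (untested in the full pipeline): map the frame into a NumPy array with pyds.get_nvds_buf_surface (which requires RGBA caps on the stream) and blend the color into the mask pixels directly, instead of emitting thousands of circle_params. The helper below, blend_mask, is my own function, not a DeepStream API; the nearest-neighbour resize is done in pure NumPy to avoid an OpenCV dependency:

```python
import numpy as np

def blend_mask(frame, mask, bbox, color=(0, 255, 0), alpha=0.4, thr=0.5):
    """Blend a translucent color over the mask area of `frame` (H x W x 3/4, uint8).

    `mask` is the float confidence mask from obj_meta.mask_params,
    `bbox` is (left, top, width, height) from obj_meta.rect_params.
    """
    left, top, w, h = (int(v) for v in bbox)
    fh, fw = frame.shape[:2]
    # Clip the bounding box to the frame
    x0, y0 = max(left, 0), max(top, 0)
    x1, y1 = min(left + w, fw), min(top + h, fh)
    if x1 <= x0 or y1 <= y0:
        return frame
    # Nearest-neighbour resize of the mask to the (clipped) bbox in pure NumPy
    mh, mw = mask.shape
    ys = ((np.arange(y0, y1) - top) * mh // h).clip(0, mh - 1)
    xs = ((np.arange(x0, x1) - left) * mw // w).clip(0, mw - 1)
    m = mask[np.ix_(ys, xs)] > thr  # binary mask at bbox scale
    # Vectorized alpha blend over the masked pixels (in place, view into frame)
    roi = frame[y0:y1, x0:x1, :3]
    roi[m] = ((1 - alpha) * roi[m] + alpha * np.asarray(color)).astype(np.uint8)
    return frame
```

In a pad probe this would be called once per object, after fetching the frame with pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id), so all per-pixel work happens inside NumPy rather than in a Python loop.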
Later, I found the official Python sample deepstream-segmentation and learned that it uses nvsegvisual, an element specifically for drawing masks; that is the effect shown when running the sample (UNet model). I noticed that its colors are opaque, and I cannot change the color in the nvsegvisual element. I also changed the infer configuration file in the sample to the yolov8seg model above, and no mask was output at all. So I guess the deepstream-app -c command does not use nvsegvisual to render the yolov8seg masks? Or how does it work?
So I’m here to ask for help:
1. Which elements does deepstream-app -c config.txt use to render the translucent mask over the target area, and so quickly?
2. How can the mask color and translucency of the nvsegvisual element be changed?
3. Why does the sample deepstream_python_apps/apps/deepstream-segmentation/deepstream_segmentation.py (NVIDIA-AI-IOT/deepstream_python_apps on GitHub) render no mask when I switch it to the yolov8seg configuration file?
Looking forward to your reply~
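For question 1, my current (unverified) guess is that deepstream-app drives the overlay through the [osd] group of deepstream_app_config.txt, in which case settings like the following would be the relevant ones. I am not sure about display-mask or whether it really requires CPU process mode, so please correct me:

```
[osd]
enable=1
# guess: instance-mask drawing may require CPU process mode
process-mode=0
display-mask=1
display-bbox=1
display-text=1
```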
