What is the best way to draw alarm zones in DeepStream?

Hi everyone,

I am trying to develop a DeepStream application and set some alarms that control an Arduino by sending it signals. Right now I am using this code to draw the alarm zones:

    # Draw alarm zones on top of the frame via NvDsDisplayMeta
    display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
    for i, zone in enumerate(u_data):
        # One display meta holds at most 16 rect_params entries, and the
        # slots must be filled contiguously starting from index 0
        rect_params = display_meta.rect_params[i]
        rect_params.left = zone['left']
        rect_params.top = zone['top']
        rect_params.width = zone['width']
        rect_params.height = zone['height']
        rect_params.border_width = 3
        rect_params.border_color.set(1.0, 0.0, 0.0, 1.0)  # red, fully opaque

    display_meta.num_rects = len(u_data)
    pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
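
For context, `u_data` here is a list of zone dictionaries that I pass in when attaching the probe to the OSD sink pad, roughly like this (the element name `nvosd`, the probe function name, and the zone values are just from my setup):

    from gi.repository import Gst

    # Alarm zones passed to the probe as user data; the keys match what
    # the probe reads above
    alarm_zones = [
        {'left': 100, 'top': 150, 'width': 300, 'height': 200},
        {'left': 600, 'top': 100, 'width': 250, 'height': 250},
    ]

    # Attach the buffer probe to the nvdsosd element's sink pad, as in
    # the DeepStream Python sample apps
    osdsinkpad = nvosd.get_static_pad("sink")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER,
                         osd_sink_pad_buffer_probe, alarm_zones)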

But I am wondering: can I use OpenCV to trigger my alarms instead of using probes? What I am afraid of is reducing the system's performance unnecessarily, because I won't always be displaying the camera output from the Jetson device.
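
For reference, the alarm logic I want to trigger looks roughly like this (a sketch; the overlap test, serial port, and signal byte are placeholders for my own setup):

    import serial  # pyserial; placeholder way of signalling the Arduino
    import pyds

    arduino = serial.Serial('/dev/ttyUSB0', 9600)  # port/baud are placeholders

    def rect_overlaps(rect, zone):
        # Axis-aligned overlap test between a detected object's bbox
        # (NvOSD_RectParams) and an alarm zone dict
        return not (rect.left + rect.width < zone['left'] or
                    zone['left'] + zone['width'] < rect.left or
                    rect.top + rect.height < zone['top'] or
                    zone['top'] + zone['height'] < rect.top)

    def check_alarms(frame_meta, zones):
        # Walk the detected objects in this frame and signal the Arduino
        # whenever one overlaps an alarm zone
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            for zone in zones:
                if rect_overlaps(obj_meta.rect_params, zone):
                    arduino.write(b'A')  # placeholder alarm signal
            try:
                l_obj = l_obj.next
            except StopIteration:
                break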

Thank you in advance.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6
• JetPack Version (valid for Jetson only): 4
• TensorRT Version: 8

The way you're doing it now gives the best performance: it uses the GPU to draw the bounding boxes. If you want to do it with OpenCV instead, you have to copy the image from GPU to CPU memory, and after drawing you have to copy it back from CPU to GPU. This round trip can significantly degrade performance.
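
For illustration, the OpenCV route would look roughly like this inside a probe (a sketch; it assumes the stream has already been converted to RGBA, e.g. with nvvideoconvert plus a capsfilter, which get_nvds_buf_surface requires):

    import cv2
    import numpy as np
    import pyds

    # Inside a pad probe: map the NvBufSurface into CPU-accessible memory
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    frame = np.array(n_frame, copy=True, order='C')  # GPU -> CPU copy
    frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGR)

    # Draw the zones on the CPU copy with OpenCV
    for zone in zones:
        cv2.rectangle(frame,
                      (zone['left'], zone['top']),
                      (zone['left'] + zone['width'],
                       zone['top'] + zone['height']),
                      (0, 0, 255), 3)

    # ...and the drawn frame would then have to be written back into the
    # GPU buffer, which is the expensive round trip mentioned above.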