Could not turn off async-mode

I have a custom Triton ensemble model for face information extraction:
person cropped image (from PGIE) → face detection → alignment (Python backend) → face embedding, gender, glasses, mask → post-processing
I want it to run on every frame and every person object, so I set async_mode: false (and also interval: 0), but it does not seem to work. Here is the console log output:

Obj 0 ndetect 1 bbox [129.40001  41.2     210.475   139.65001]
Infered, got face from custom C++ parser:129.400009,41.200001,210.475006,139.650009
Frame 4 probe got:  [129.400009, 41.200001, 210.475006, 139.650009]
============================================================
Frame 5 probe got nothing
============================================================
Frame 6 probe got nothing
============================================================
Frame 7 probe got nothing
============================================================
Frame 8 probe got:  [129.400009, 41.200001, 210.475006, 139.650009]
============================================================
Frame 9 probe got:  [129.400009, 41.200001, 210.475006, 139.650009]
============================================================
Frame 10 probe got:  [129.400009, 41.200001, 210.475006, 139.650009]
============================================================
Frame 11 probe got:  [129.400009, 41.200001, 210.475006, 139.650009]
============================================================
...
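
For context, the probe that printed these lines is not shown in the thread. Below is a minimal sketch of such a pad probe, assuming a Python app using the pyds bindings (function and variable names are illustrative, not from the original post):

import pyds
from gi.repository import Gst

def sgie_src_pad_buffer_probe(pad, info, u_data):
    # Walk batch -> frame -> object metadata and report whether the SGIE
    # attached any output to each frame's objects.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        got_result = False
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Results from the custom parser land on the object as
            # classifier meta; the exact layout depends on the parser.
            if obj_meta.classifier_meta_list is not None:
                got_result = True
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        if got_result:
            print("Frame %d probe got a result" % frame_meta.frame_num)
        else:
            print("Frame %d probe got nothing" % frame_meta.frame_num)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK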

Config file:

################################################################################
# Copyright (c) 2021 NVIDIA Corporation.  All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

infer_config {
  unique_id: 7
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    inputs: [ {
      name: "original_image"
    }]
    outputs: [
      {name: "res_num_detections"},
      {name: "res_bboxes"},
      {name: "res_scores"},
      {name: "res_landmarks"},
      {name: "res_embedding"},
      {name: "res_gender"},
      {name: "res_glass"},
      {name: "res_mask"}
    ]
    trt_is {
      model_name: "ens_face_detect_align_embed_attr"
      version: -1
      model_repo {
        root: "/deepstream/triton-server/models"
        strict_model_config: false
        log_level: 1
      }
      
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    tensor_name: "original_image"
    maintain_aspect_ratio: 1

  }

  postprocess {
    classification {
      custom_parse_classifier_func: "NvDsInferParseCustomFaceEmbeddingAttribute"
    }
  }

  custom_lib {
      path : "/deepstream/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so"
  }

  extra {
    copy_input_to_host_buffers: false
  }
}
input_control {
  operate_on_gie_id: 1
  operate_on_class_ids: 0
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  async_mode: false
  interval: 0
  object_control {
    bbox_filter {
      min_width: 96
      min_height: 96
      }
    }
}
output_control {
  output_tensor_meta: false
}

I’ve tried different trackers (DCF and IOU), but the result is similar.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): A100
• DeepStream Version: DS 6.0-triton

How did you confirm whether it runs on "every frame and person object"?

Thanks for your attention. Currently, in gst-nvinfer and gst-nvinferserver:

The object is inferred upon only when it is first seen in a frame (based on its object ID) or when the size (bounding box area) of the object increases by 20% or more

I want this SGIE model to re-infer even when the object's size increases by less than 20%. I found that I can modify and recompile gst-nvinfer to do this, but I could not find a way to do it with gst-nvinferserver, and my ensemble model is difficult to implement with gst-nvinfer.
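
In other words, the quoted default boils down to a gate like the following. This is only a Python illustration of the rule (the real check lives in the plugins' C++ sources); all names are made up for the example:

REINFER_AREA_GROWTH = 0.2  # the "20% or more" threshold quoted above

def should_infer_object(object_id, bbox_area, last_inferred_area_by_id):
    # Infer an object only when its ID is first seen, or when its bbox
    # area has grown by at least 20% since the last inference.
    last_area = last_inferred_area_by_id.get(object_id)
    if last_area is None:
        return True
    return bbox_area >= last_area * (1.0 + REINFER_AREA_GROWTH)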

Can you try "secondary-reinfer-interval"?

Gst-nvinfer — DeepStream 6.0.1 Release documentation (nvidia.com)
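
For example, a minimal sketch of where the property would go, assuming a gst-nvinfer SGIE config (it is documented as a property for secondary GIEs; every key below other than secondary-reinfer-interval is a placeholder):

[property]
gie-unique-id=2
process-mode=2
operate-on-gie-id=1
secondary-reinfer-interval=0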

I already set it, but it doesn't seem to work:

################################################################################
#
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
# Skip frame
#interval=2
#0=RGB, 1=BGR
model-color-format=0
input-dims=3;608;608;0
onnx-file=../weights/object_detection/yolor_csp_x_star-nms.onnx
model-engine-file=../weights/object_detection/yolor_csp_x_star-nms.onnx_b4_gpu0_fp16.engine
labelfile-path=../labels.txt
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=80
gie-unique-id=1
network-type=0
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
maintain-aspect-ratio=1
custom-lib-path=../nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYolor
#scaling-filter=0
#scaling-compute-hw=0
secondary-reinfer-interval=0

[class-attrs-all]
nms-iou-threshold=0.6
pre-cluster-threshold=0.4

Can you try setting "process-mode=2"?

Gst-nvinferserver — DeepStream 6.0.1 Release documentation (nvidia.com)
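
For reference, a sketch of the two forms, which the linked docs treat as the same secondary (operate-on-objects) mode; the first is the gst-nvinferserver element property, the second the field inside the protobuf config:

process-mode=2

input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS
}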

I tried, but it behaves the same as setting process_mode: PROCESS_MODE_CLIP_OBJECTS in my Triton config file.

Can you try with the latest DS 6.1?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.