Secondary inference using nvinferserver after deepstream-ssd-parser

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU Tesla T4
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 450.119.03
• Issue Type (questions, new requirements, bugs): questions (possibly a bug)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
deepstream-ssd-parser sample app. I am trying to add secondary inference (a multi-label classifier using nvinferserver) on the primary detection objects. The pipeline works, i.e. there are no issues with loading the models or running the pipeline, but I am not able to get any classifier output using a probe function on either the sgie src pad or the nvvidconv sink pad (the next element). obj_meta.obj_user_meta_list is always None. My probe function -

import pyds
from gi.repository import Gst

# nvds_infer_parse_custom_tf and add_classifier_obj_meta_to_frame are
# custom helpers defined elsewhere in this app.

def sgie1_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list

    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        l_obj = frame_meta.obj_meta_list
        count = frame_meta.num_obj_meta

        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break

            l_class = obj_meta.obj_user_meta_list
            print(l_class)  # debug: always prints None in my runs

            while l_class is not None:
                l_user = pyds.NvDsUserMeta.cast(l_class.data)

                # Skip user meta that is not tensor output meta; advance
                # the iterator first so the loop cannot spin forever.
                if (
                        l_user.base_meta.meta_type
                        != pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META
                ):
                    try:
                        l_class = l_class.next
                    except StopIteration:
                        break
                    continue

                tensor_meta = pyds.NvDsInferTensorMeta.cast(l_user.user_meta_data)

                # Collect all output layers first, then parse them once.
                layers_info = []
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    layers_info.append(layer)

                frame_object_list = nvds_infer_parse_custom_tf(layers_info, count)

                for frame_object in frame_object_list:
                    add_classifier_obj_meta_to_frame(frame_object, batch_meta, obj_meta)

                try:
                    l_class = l_class.next
                except StopIteration:
                    break

            try:
                l_obj = l_obj.next
            except StopIteration:
                break

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
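
For reference, the probe is attached to the sgie src pad in the usual GStreamer way (a minimal sketch, assuming the element variable is sgie1 as in the pipeline below):

sgie1_src_pad = sgie1.get_static_pad("src")
if not sgie1_src_pad:
    print("Unable to get src pad of sgie1")
else:
    # Call the probe for every buffer leaving the sgie
    sgie1_src_pad.add_probe(Gst.PadProbeType.BUFFER, sgie1_src_pad_buffer_probe, 0)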

My pipeline-

streammux.link(queue1)
queue1.link(pgie)
pgie.link(sgie1)
sgie1.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(queue5)
queue5.link(nvvidconv2)
nvvidconv2.link(capsfilter)
capsfilter.link(encoder)
encoder.link(codeparser)
codeparser.link(container)
container.link(sink)

I have tried the same probe function with a PeopleNet TLT model as the primary inference (using nvinfer) and it works great: I get the output tensors of the classifier model.

Is this an issue with how object meta is added to the frame in the deepstream-ssd-parser example, such that these objects are not passed as input to the secondary classifier?
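
For context, the deepstream-ssd-parser example adds detections roughly like this (a condensed sketch of the sample's add_obj_meta_to_frame, with resolution scaling and display settings omitted). Note that, if I read the sample correctly, it never sets obj_meta.unique_component_id, which I believe is the field a downstream operate_on_gie_id filter matches on:

def add_obj_meta_to_frame(frame_object, batch_meta, frame_meta):
    # Acquire an object meta slot from the batch pool
    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)

    # Bounding box from the parsed detection (scaling to input
    # resolution omitted in this sketch)
    rect_params = obj_meta.rect_params
    rect_params.left = int(frame_object.left)
    rect_params.top = int(frame_object.top)
    rect_params.width = int(frame_object.width)
    rect_params.height = int(frame_object.height)

    obj_meta.confidence = frame_object.detectionConfidence
    obj_meta.class_id = frame_object.classId

    # unique_component_id is left at its default here; an sgie
    # configured with operate_on_gie_id: 1 matches on this field,
    # so objects added this way may be skipped by the sgie.
    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)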

I am confused whether secondary inference is happening at all, as I am not getting any errors anywhere, but I also can't read the output.

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

More Information:
My pgie config-

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    trt_is {
      model_name: "RGB_FACE_DETECT"
      version: -1
      model_repo {
        root: "../MODELS/TRT_MODEL_REPO/"
        strict_model_config: true
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NONE
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "../MODELS/TRT_MODEL_REPO/RGB_FACE_DETECT/labels.txt"
    other {}
  }

  extra {
    copy_input_to_host_buffers: false
  }

  custom_lib {
    path: "/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_infercustomparser.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true
}

My sgie config -

infer_config {
  unique_id: 3
  gpu_ids: [0]
  max_batch_size: 16
  backend {
    inputs: [ {
      name: "x"
    }]
    outputs: [
      {name: "sequential/output_layer/Sigmoid"}
    ]
    trt_is {
      model_name: "GHM_CLASS"
      version: -1
      model_repo {
        root: "../MODELS/TRT_MODEL_REPO/"
        strict_model_config: true
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NONE
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    other {}
  }
  
  extra {
    copy_input_to_host_buffers: false
  }

  custom_lib {
    path: "/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_infercustomparser.so"
  }

}
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  operate_on_gie_id: 1
  operate_on_class_ids: [1]
  interval: 0
  async_mode: false
}
output_control {
  output_tensor_meta: true
}

I was able to resolve this by setting output_tensor_meta: false in my pgie config and setting the following properties -

postprocess {
  labelfile_path: "../MODELS/TRT_MODEL_REPO/RGB_FACE_DETECT/labels.txt"
  detection {
    num_detected_classes: 2
    custom_parse_bbox_func: "NvDsInferParseCustomTfSSD"
    nms {
      confidence_threshold: 0.5
      iou_threshold: 0.3
      topk: 20
    }
  }
}

The Python SSD parser used in the deepstream-ssd-parser example seems to be the problem here: the detection objects it adds are not visible to the secondary inference.

Glad to know the issue is resolved, thanks for the update.
