Run SGIE classifier depending on PGIE classifier results

Hi,

I need to know if it is possible to run a pipeline using deepstream-app that performs two consecutive classifications with two different models. Everything I’ve seen regarding PGIEs and SGIEs uses a detector as the PGIE and then applies a classifier as the SGIE, which makes sense but differs from our scenario, where the first model does not output bboxes.

If this is possible, can somebody share a config file for this kind of scenario?

One of the problems I’ve experienced is that the secondary GIE apparently does not support process mode FULL_FRAME, since the system automatically resets the value to PROCESS_MODE_CLIP_OBJECTS …

• Hardware Platform (Jetson / GPU) - RTX 3090
• DeepStream Version 6.2

Please refer to this classification sample, which uses a classification model as the PGIE. If using two consecutive classification models, please refer to this command:
gst-launch-1.0 filesrc location=blueCar.jpg ! jpegdec ! videoconvert ! video/x-raw,format=I420 ! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=./dstest_appsrc_config.txt ! nvinfer config-file-path=./cartype.txt ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! nvdsosd ! nvvideoconvert ! video/x-raw,format=I420 ! jpegenc ! filesink location=out.jpg
cartype.txt (3.7 KB)
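For reference, a second-classifier nvinfer config along the lines of the attached cartype.txt would typically look like the sketch below. The file names and values here are hypothetical placeholders, not the contents of the actual attachment; the key points are network-type=1 (classifier) and a distinct gie-unique-id:

[property]
gpu-id=0
# hypothetical model and label files; the actual attachment may differ
onnx-file=cartype.onnx
labelfile-path=cartype_labels.txt
batch-size=1
net-scale-factor=0.0039215686
# network-type=1 marks this nvinfer instance as a classifier
network-type=1
# process-mode=1 runs on the full frame rather than on object crops
process-mode=1
# each model in the chain needs its own unique id
gie-unique-id=2
classifier-threshold=0.5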

Thanks for the response; in my case it is not working. I’m using nvinferserver instead of nvinfer for the inferences. Apparently both the first and second inferences are running for all frames, no matter what I set in the input_control for the second model. These are my config files for the two nvinferserver models:

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    inputs: [ {
      name: "input_1"
    }]
    outputs: [
      {name: "Vector_clasificador_final"}
    ]
    triton {
      model_name: "pgie"
      version: -1
      grpc {
        url: "localhost:8011"
        enable_cuda_buffer_sharing: true
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NHWC
    tensor_name: "input_1"
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 3
    normalize {
      scale_factor: 0.0039215697906911373
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "./labelstest02.txt"
    classification {
      threshold: 0.5
    }
  }

  extra {
    copy_input_to_host_buffers: false
    output_buffer_pool_size: 2
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
}

output_control {
  output_tensor_meta : true
}

infer_config {
  unique_id: 2
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    inputs: [ {
      name: "input_1"
    }]
    outputs: [
      {name: "Vector_clasificador_final"}
    ]
    triton {
      model_name: "sgie"
      version: -1
      grpc {
        url: "localhost:8011"
        enable_cuda_buffer_sharing: true
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NHWC
    tensor_name: "input_1"
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 3
    normalize {
      scale_factor: 0.0039215697906911373
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "./sev_labels.txt"
    classification {
      threshold: 0.5
    }
  }

  extra {
    copy_input_to_host_buffers: false
    output_buffer_pool_size: 2
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME,
  operate_on_gie_id: 1,
  operate_on_class_ids: 1
}

output_control {
  output_tensor_meta : true
}
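
To double-check what is actually running where, one can attach a buffer probe after the second nvinferserver and log the tensor output metas per unique_id. A minimal sketch, assuming the DeepStream Python bindings (pyds) are installed and output_tensor_meta is enabled as in the configs above; the element name sgie in the attach line is hypothetical:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def sgie_src_probe(pad, info, u_data):
    # Log which gie unique_id attached tensor output meta on each frame.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                # unique_id says which infer_config produced this meta (1 = pgie, 2 = sgie)
                print("frame", frame_meta.frame_num, "tensor meta from gie", tensor_meta.unique_id)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# attach to the src pad of the second nvinferserver element ("sgie" is a hypothetical name):
# sgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, sgie_src_probe, 0)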

Where is this logic coming from? From the nvinferserver documentation, an SGIE can also support PROCESS_MODE_FULL_FRAME, and nvinferserver is open source, so you can add logging in GstNvInferServerImpl::processBatchMeta to debug.
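
For example, a rough sketch of that debug workflow; the install path and source file name below assume a default DeepStream 6.2 setup, so check the README in that directory for the exact build steps:

cd /opt/nvidia/deepstream/deepstream-6.2/sources/gst-plugins/gst-nvinferserver
# add a GST_DEBUG/printf line inside GstNvInferServerImpl::processBatchMeta
# (gstnvinferserver_impl.cpp), then rebuild and reinstall the plugin
make
sudo make install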

What does this mean?

I mean I can see the inference results of the two models. Since I’m telling the second inference to run only on a certain class detected by the first inference module, I would expect to see the second inference running only when the first one detects class X (1 in my case), but that is not happening.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

There is no “class X detected” event because the first GIE is not a detection model. Why not add a detection PGIE?
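
With a detection model as the PGIE, the second model’s input_control would switch to clip mode so it only runs on the detected objects of the desired class. A hypothetical sketch in the same nvinferserver config format, assuming the detector keeps unique_id: 1:

input_control {
  # run only on object crops produced by upstream detections
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  # only consume objects from the gie with unique_id 1 (the detection pgie)
  operate_on_gie_id: 1
  # only objects with class id 1
  operate_on_class_ids: [1]
}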
