Output of createSSDDetectionOutputPlugin


I am trying to implement SSD with TensorRT and I am wondering about two things regarding createSSDDetectionOutputPlugin, whose documentation is here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/topics/_nv_infer_plugin_8h.html

I am wondering whether it is correct that every run of inference will output keep_top_k detections, because with Caffe I get a variable number of detections, depending on how many the network finds in the picture.

This is what the layer definition looks like:

layer {
  name: "detection_out"
  type: "DetectionOutput"
  bottom: "mbox_loc"
  bottom: "mbox_conf_flatten"
  bottom: "mbox_priorbox"
  top: "detection_out"
  detection_output_param {
    num_classes: 4
    share_location: true
    background_label_id: 0
    nms_param {
      nms_threshold: 0.45
      top_k: 400
    }
    code_type: CENTER_SIZE
    keep_top_k: 200
    confidence_threshold: 0.01
  }
}

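If the plugin does return a fixed-size buffer of keep_top_k rows, the usual approach is to filter it on the host side. Here is a minimal sketch of that post-filtering, assuming the standard Caffe SSD output layout of 7 floats per detection ([image_id, label, confidence, xmin, ymin, xmax, ymax]) and that padded slots are marked with image_id = -1 or a near-zero confidence; those layout details are an assumption, since this plugin is undocumented:

```python
# Hedged sketch: filter a fixed-size DetectionOutput buffer down to the
# real detections. Assumes keep_top_k rows of 7 floats each, in the
# standard Caffe SSD layout [image_id, label, confidence, xmin, ymin,
# xmax, ymax], with unused rows padded (image_id == -1 by convention).
KEEP_TOP_K = 200
FIELDS = 7  # image_id, label, confidence, xmin, ymin, xmax, ymax

def filter_detections(raw, confidence_threshold=0.01):
    """Keep only rows that represent real detections."""
    detections = []
    for i in range(KEEP_TOP_K):
        row = raw[i * FIELDS:(i + 1) * FIELDS]
        image_id, label, confidence = row[0], row[1], row[2]
        if image_id < 0 or confidence < confidence_threshold:
            continue  # padded or below-threshold slot
        detections.append({
            "label": int(label),
            "confidence": confidence,
            "bbox": tuple(row[3:7]),  # (xmin, ymin, xmax, ymax)
        })
    return detections
```

With this in place, a fixed-size plugin output behaves like Caffe's variable-length output: you simply discard the padded rows.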
Another thing I am wondering about: why does the number of output blobs need to be 2, according to the API? If you look at the layer definition above, there is only one top blob, so when I run the parser I get the error “Plugin layer output count is not equal to caffe output count”. Should I create a dummy top blob to fix this, or will that mess with the output of the layer?
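A workaround that has been used with TensorRT's SSD samples is to declare a second top blob in the prototxt so the Caffe parser's output count matches the plugin's two outputs (the detections tensor plus a keep-count tensor). Treat the blob name below as hypothetical; only the blob count matters to the parser:

```
layer {
  name: "detection_out"
  type: "DetectionOutput"
  bottom: "mbox_loc"
  bottom: "mbox_conf_flatten"
  bottom: "mbox_priorbox"
  top: "detection_out"
  top: "keep_count"   # hypothetical second top; satisfies the parser's output count
  detection_output_param {
    ...
  }
}
```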


createSSDDetectionOutputPlugin is not officially released and is not within our support scope.
We don't have any more information to share with you.

Sorry for the inconvenience.