NvInferServer asynchronous mode is applicable for secondary classifiers only. Turning off asynchronous mode

I looked at the documentation for nvinferserver, but it is not clear how to configure a Triton Server model as a secondary model, and I haven't found an example either.
I tried it anyway, and it seems to work fine, but I get the following warning:

Warning: gst-library-error-quark: NvInferServer asynchronous mode is applicable for secondaryclassifiers only. Turning off asynchronous mode (5): gstnvinferserver_impl.cpp(352): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:mymodel-nvinference-engine

Why is that? How can I get it to run asynchronously? This is my model configuration:

infer_config {
  unique_id: 10
  gpu_ids: [0]
  max_batch_size: 64
  backend {
    trt_is {
      model_name: "mymodel"
      version: -1
      model_repo {
        root: "/src/pipeline/models/repository"
        log_level: 2
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    other {}
  }

  extra {
    copy_input_to_host_buffers: false
  }

  custom_lib {
    path: "/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so"
  }
}
input_control {
  operate_on_gie_id: 1
  operate_on_class_ids: [0]
  async_mode: 1
  object_control {
    bbox_filter {
      min_width: 64
      min_height: 64
    }
  }
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  interval: 0
}

output_control {
  output_tensor_meta: true
}

Following the documentation here (Gst-nvinferserver — DeepStream 6.0 Release documentation), I also tried setting process_mode: PROCESS_MODE_FULL_FRAME at the root level of the configuration file, but then I get this error:

[libprotobuf ERROR /home/amkale/jitendrak/Triton-Github/client/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:317] Error parsing text-format nvdsinferserver.config.PluginControl: 50:13: Message type "nvdsinferserver.config.PluginControl" has no field named "process_mode".
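If I read that error correctly, the top-level nvdsinferserver.config.PluginControl message has no process_mode field at all, and the field only exists inside input_control, which is where I already have it. A minimal sketch, trimmed from my config above:

input_control {
  operate_on_gie_id: 1
  # clip detected objects for a secondary model; a primary model would use PROCESS_MODE_FULL_FRAME
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  async_mode: 1
}

With process_mode inside input_control the file parses fine, but async_mode still gets turned off with the warning above, so I still don't understand what the plugin expects here.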

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application, and the function description.)

  • NVIDIA T4
  • DeepStream 6.0 Triton container