Nvinferserver for models that do not have frame inputs

• Hardware Platform (Jetson / GPU) dGPU, RTX 2080
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only) N/A
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 495.29.05
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) describe later
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I understand there are some related topics,
but since this issue is important to me, let me restate it.

In “nvinferserver”, is there any way to run a model whose inputs do not include Frame-Input (preprocessed frame input)?

I don’t care if it’s a workaround
(e.g., adding a dummy name to “tensor-name” in the nvinferserver settings, or putting a dummy input in Triton’s config.pbtxt).
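To make the second workaround concrete, here is a sketch of what such a config.pbtxt might look like. This is only an illustration of the idea, not a confirmed working setup: the model name, tensor names, and shapes are all hypothetical, and whether nvinferserver would actually accept it is exactly the open question.

```
name: "no_frame_model"            # hypothetical model name
platform: "onnxruntime_onnx"      # hypothetical backend
input [
  {
    name: "real_tensor_input"     # the model's actual non-frame input
    data_type: TYPE_FP32
    dims: [ 128 ]
  },
  {
    name: "dummy_frame"           # dummy image-shaped input added only so
    data_type: TYPE_FP32          # nvinferserver has a frame tensor to fill
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "scores"
    data_type: TYPE_FP32
    dims: [ 10 ]
  }
]
```

The model itself would have to ignore the dummy_frame tensor for this to be harmless.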

If this is not possible, I feel that nvinferserver’s capabilities are much more limited than Triton’s…

Related reply:

  1. Currently nvinferserver still does not support tensor-meta input; please refer to
    the latest doc: Gst-nvinferserver — DeepStream 6.1.1 Release documentation
  2. What do you mean by “preprocessed frame input”? nvinferserver supports secondary mode; it can process objects created by the PGIE.
  1. I understand
  2. I was referring to the “secondary inference” input.
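For readers unfamiliar with the secondary mode mentioned in the reply, it is enabled in the nvinferserver config via input_control. A minimal sketch, assuming a hypothetical classifier model and repo path:

```
infer_config {
  unique_id: 2                    # distinct from the PGIE's unique-id
  backend {
    triton {
      model_name: "my_classifier" # hypothetical model name
      version: -1
      model_repo {
        root: "/path/to/model_repo"   # hypothetical repo path
      }
    }
  }
}
input_control {
  # Run on object crops produced by the primary GIE rather than full frames
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  operate_on_gie_id: 1            # unique-id of the upstream PGIE
  interval: 0
}
```

Note that even in this mode the input to the model is still a (cropped) frame tensor, which is why it does not answer the original question about non-frame inputs.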

I understand that inference for tensor-meta input is not possible with nvinferserver.
Thanks for the response.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.