Can Gst-nvdspreprocess be used for a multiple-input model?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): None
• DeepStream Version: 6.0 or later
• JetPack Version (valid for Jetson only): None
• TensorRT Version: None
• NVIDIA GPU Driver Version (valid for GPU only): None
• Issue Type (questions, new requirements, bugs): Questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing): None
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or sample application, and the function description): None


Can Gst-nvdspreprocess be used for a multiple-input model?
Or does it only work for models with a single input?

If Gst-nvdspreprocess only supports single-input models, how can I implement a multiple-input model? (Is using Gst-nvdsvideotemplate the only way?)

Do you mean a model with multiple input layers?

Yes. e.g., Gaze Estimation | NVIDIA NGC

Yes, it can. You need to implement the customized tensor meta part yourself.

For Gaze Estimation, we already have a sample with nvdsvideotemplate: deepstream_tao_apps/apps/tao_others/deepstream-gaze-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
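
For reference, a minimal sketch of how a custom tensor-preparation library is wired into nvdspreprocess. The key names follow the config_preprocess.txt sample shipped with DeepStream 6.x; the library path, tensor name, and input shape below are placeholders for your own model and build:

```
[property]
enable=1
# unique-id of the nvinfer instance that should consume this tensor
target-unique-ids=1
# batch;channel;height;width of the input layer this instance prepares
network-input-shape=1;3;224;224
processing-width=224
processing-height=224
network-color-format=0
tensor-data-type=0
# name of the model input layer this tensor feeds
tensor-name=input_left_eye
# custom library implementing the tensor preparation
custom-lib-path=/path/to/libmy_multi_input_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation
```

On the gst-nvinfer side you also have to tell the plugin to consume the preprocessed tensors from metadata instead of doing its own preprocessing (the input-tensor-meta property, if I recall the property name correctly).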

What exactly would the “customized tensor meta part” code look like?
I have read through the nvdspreprocess documentation but could not figure it out.

Do you have an example code (for Gst-nvdspreprocess + multiple-input model)?

The nvdspreprocess source code is in /opt/nvidia/deepstream/sources/gst-plugins/gst-nvdspreprocess.

The gst-nvinfer source code is in /opt/nvidia/deepstream/sources/gst-plugins/gst-nvinfer and
/opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer.

You need to customize the function yourself.
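
To make that a bit more concrete, below is a minimal sketch of the custom tensor-preparation hook exposed by the nvdspreprocess custom library. The function name and structures are taken from nvdspreprocess_interface.h / nvdspreprocess_lib in the DeepStream 6.x sources; verify them against the headers installed on your system. The packing loop in the middle is a placeholder that you would replace with your model-specific code (e.g. the extra eye/facegrid tensors for gaze estimation), one nvdspreprocess instance per input layer or all inputs packed here.

```cpp
/* Sketch of a custom tensor-preparation hook for gst-nvdspreprocess.
 * Structure and enum names follow nvdspreprocess_interface.h (DeepStream 6.x);
 * check them against the header shipped with your DeepStream version. */
#include "nvdspreprocess_interface.h"

extern "C" NvDsPreProcessStatus
CustomTensorPreparation (CustomCtx *ctx, NvDsPreProcessBatch *batch,
    NvDsPreProcessCustomBuf *&buf, CustomTensorParams &tensorParam,
    NvDsPreProcessAcquirer *acquirer)
{
  /* Acquire a buffer from the tensor pool managed by the plugin. */
  buf = acquirer->acquire ();
  if (!buf)
    return NVDSPREPROCESS_CUSTOM_TENSOR_FAILED;

  /* For each unit (frame/ROI) in the batch, fill the slice of the tensor
   * that belongs to this nvdspreprocess instance. For a multiple-input
   * model, either run one nvdspreprocess instance per input layer (each
   * with its own tensor-name) or pack every tensor you need here. */
  for (size_t i = 0; i < batch->units.size (); i++) {
    /* batch->units[i].converted_frame_ptr points to the scaled/converted
     * frame produced by the transformation step; copy or further process
     * it into buf->memory_ptr at the correct offset (model-specific). */
  }

  /* Report how many units were actually packed into this tensor. */
  tensorParam.params.network_input_shape[0] = (int) batch->units.size ();

  return NVDSPREPROCESS_SUCCESS;
}
```

The plugin attaches the resulting buffer to the batch as GstNvDsPreProcessBatchMeta, which gst-nvinfer then consumes when it is configured to take its input tensors from metadata.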

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.