Question about the preprocess plugin queue?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the contents of the configuration files, the command line used, and other details needed to reproduce.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or which sample application, and a description of the function.)

  1. When I read the gstnvpreprocess source code, I cannot understand some of its design. What is the meaning of preprocess_queue? If there were no preprocess_queue, and a gst-queue plugin were simply inserted between the elements instead, like this:

preprocess → gst-queue → infer → gst-queue → postprocess (with no internal preprocess_queue)

what is the difference between them? Also, preprocess_queue does not seem to control the queue length. What benefits does preprocess_queue bring?
2. Another question: do most of the original GStreamer plugins have their own internal process queue? Or do they just handle the buffers and meta synchronously? If I want multithreading, is the gst-queue plugin the only way?
Thanks for your help.

It is just a basic technique for doing some processing asynchronously; there is no special meaning. With the sample library, the scaling and the custom tensor-preparation function run in different threads via the “preprocess_queue”.
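For illustration only, here is a minimal sketch of the pattern such an internal queue typically follows (this is not the actual gst-nvdspreprocess source; the `AsyncStage` struct and function names are made up): the streaming thread pushes buffers onto a GQueue, and a dedicated worker thread pops and processes them, which is what lets scaling and tensor preparation run asynchronously.

```c
/* Illustrative sketch only -- not the actual gst-nvdspreprocess code.
 * A chain function pushes buffers onto a GQueue; a worker thread pops
 * them, so heavy processing does not block the upstream streaming thread. */
#include <gst/gst.h>

typedef struct {
  GQueue  *queue;   /* holds GstBuffer* waiting to be processed          */
  GMutex   lock;    /* protects the queue                                */
  GCond    cond;    /* signals the worker when work is available         */
  gboolean stop;    /* set at shutdown so the worker can exit            */
} AsyncStage;

/* Producer side: called from the streaming thread (e.g. a chain function). */
static void
async_stage_push (AsyncStage *s, GstBuffer *buf)
{
  g_mutex_lock (&s->lock);
  g_queue_push_tail (s->queue, buf);
  g_cond_signal (&s->cond);
  g_mutex_unlock (&s->lock);
}

/* Consumer side: runs in its own GThread. */
static gpointer
async_stage_worker (gpointer data)
{
  AsyncStage *s = data;

  g_mutex_lock (&s->lock);
  while (!s->stop) {
    GstBuffer *buf = g_queue_pop_head (s->queue);
    if (buf == NULL) {
      g_cond_wait (&s->cond, &s->lock);   /* nothing queued yet */
      continue;
    }
    g_mutex_unlock (&s->lock);

    /* ... do scaling / custom tensor preparation on `buf` here ... */
    gst_buffer_unref (buf);

    g_mutex_lock (&s->lock);
  }
  g_mutex_unlock (&s->lock);
  return NULL;
}
```

The difference from inserting a gst-queue element is mainly granularity: an internal queue like this lets one plugin hand work between its own stages, while a gst-queue element decouples whole elements in the pipeline.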

It is normal to have multiple threads inside a GStreamer plugin. The GStreamer framework itself is also multithread-based. I don’t understand what kind of multithreading you want. Basic knowledge of GStreamer can be obtained from the GStreamer community: https://gstreamer.freedesktop.org/
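As a plain GStreamer example of what this means in practice (ordinary upstream elements, nothing DeepStream-specific): each `queue` element in a pipeline creates a thread boundary, so everything downstream of it runs in its own streaming thread. This is the usual way to get element-level parallelism without writing any plugin-internal queue.

```c
/* Build and run a pipeline in which the two `queue` elements each start
 * a new streaming thread for the elements downstream of them. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc num-buffers=300 ! queue ! videoconvert ! "
      "queue ! fakesink sync=false", &err);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", err->message);
    g_clear_error (&err);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait for EOS or an error, then shut down. */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```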

Thank you, I understand your meaning.
