Process the detected PGIE objects to match the input size of the SGIE

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) NVIDIA GeForce RTX 4070 Laptop GPU
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only) 535.216.01
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for a new requirement. Include the module name — which plugin or which sample application — and the function description)

I have a Python application with a PGIE and an SGIE. The PGIE is a detector and the SGIE is a classifier with an input size of 224x224. How can I guarantee that the SGIE receives an input of 224x224? Is there any Python sample app that shows how to match the PGIE output to the SGIE input?

Generally speaking, no extra work is required. For models in ONNX format, nvinfer automatically parses the model's input width and height, and it scales each detected object's crop to that size before inference.

Note that DS-7.1 no longer supports models that are not in ONNX format.

For DS-7.0, you can refer to the following configuration. The infer-dims parameter is used to specify the width and height of the model's input.
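As an illustration, a minimal SGIE nvinfer config fragment might look like the sketch below. The filename and GIE IDs are placeholders for your setup; infer-dims assumes a 3-channel 224x224 input in CHW order:

```ini
[property]
gpu-id=0
# Pre-built TensorRT engine (hypothetical filename)
model-engine-file=classifier_224.engine
# Explicit input dims in CHW order: channels;height;width
infer-dims=3;224;224
# 2 = secondary mode: run on objects detected upstream
process-mode=2
# 1 = classifier network
network-type=1
# Operate on objects produced by the PGIE (gie-unique-id=1 assumed)
gie-unique-id=2
operate-on-gie-id=1
```

With process-mode=2, nvinfer crops each PGIE bounding box from the frame and resizes it to the dims above, so the classifier always sees 224x224 input.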

The model that I'm using is an .engine file. Is no extra configuration required in that case either?

When TensorRT/trtexec generates the engine, you need to specify the width and height of the input layer at build time.

After the conversion is complete, there is usually no need for additional settings in DeepStream.
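As a sketch of that build step: assuming the ONNX model has a single input tensor named `input` with a dynamic batch dimension (both the filename and the tensor name here are placeholders you would replace with your own), the engine can be built with explicit 224x224 spatial dims via trtexec's shape flags:

```shell
# Build a TensorRT engine with the input fixed at 3x224x224 (CHW),
# allowing the batch dimension to range from 1 to 16.
trtexec --onnx=classifier.onnx \
        --saveEngine=classifier_224.engine \
        --minShapes=input:1x3x224x224 \
        --optShapes=input:8x3x224x224 \
        --maxShapes=input:16x3x224x224
```

The dims baked into the engine are what DeepStream will use at runtime, which is why no further sizing configuration is normally needed.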

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.