Bounding box coordinates after nvinfer

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): All
• DeepStream Version: All
• JetPack Version (valid for Jetson only): All
• TensorRT Version: All
• NVIDIA GPU Driver Version (valid for GPU only): All
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing): run deepstream-app
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or for which sample application, and the function description)

I didn’t find any information about this in the Gst-nvinfer plugin description, in the DeepStream SDK documentation, or in earlier topics.

The nvinfer element comes after streammux in the pipeline. For streammux we can set an output size, for example width=1280 and height=720.
For nvinfer we can provide a config file with, for example, infer-dims=3;224;224.
nvinfer will then resize the 1280x720 output of streammux to 224x224 as the network input.
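For reference, a minimal sketch of the two settings in question, using the values above (section names follow the standard deepstream-app and gst-nvinfer config formats):

```ini
# deepstream-app config: streammux output resolution
[streammux]
width=1280
height=720

# separate gst-nvinfer config file: network input dimensions (channels;height;width)
[property]
infer-dims=3;224;224
```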
My question is: is the output of nvinfer mapped to 1280x720, or does it correspond to 224x224?
When I write out the output of the pipeline in a probe function, I have to know which resolution the coordinates correspond to.
Thanks, András

The gst-nvinfer plugin source code is open source. It is an “in-place” transform plugin, so the output buffer is the input buffer. See the GStreamer documentation on transform elements and GstBaseTransform.
This is a very basic GStreamer concept. Please make sure you are familiar with GStreamer before you start with DeepStream.
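A quick way to see this in practice: attach buffer probes to the nvinfer sink and src pads and print the negotiated caps; both report the streammux resolution, because nvinfer passes its input buffer through unchanged. This is a sketch that assumes an already-running GStreamer `pipeline` containing a gst-nvinfer element named "pgie" (both names are illustrative):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def print_caps(pad, info, label):
    # The negotiated caps carry the buffer resolution; for an in-place
    # transform like gst-nvinfer they are identical on both pads.
    caps = pad.get_current_caps()
    if caps:
        s = caps.get_structure(0)
        print(label, s.get_value("width"), "x", s.get_value("height"))
    return Gst.PadProbeReturn.OK

pgie = pipeline.get_by_name("pgie")  # assumed element name
pgie.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, print_caps, "in: ")
pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, print_caps, "out:")
```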

The resized image is used only for TensorRT model inference; it is not passed to the downstream components in the pipeline.

The bbox coordinates are mapped to the video resolution (the streammux output, 1280x720 in your example), since the downstream components never know anything about the model.
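A minimal probe sketch, assuming the DeepStream Python bindings (pyds) and following the pattern of the official Python samples; the function and element names are illustrative. The rect_params values it prints are in streammux output pixels (1280x720 here), already rescaled by gst-nvinfer, not in the 224x224 network input space:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def bbox_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj_meta.rect_params
            # left/top/width/height are in streammux output pixels
            print(f"frame {frame_meta.frame_num}: "
                  f"left={r.left:.0f} top={r.top:.0f} "
                  f"width={r.width:.0f} height={r.height:.0f}")
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach to any pad downstream of nvinfer, e.g. the OSD sink pad
# ("nvosd" is an assumed element name):
# pipeline.get_by_name("nvosd").get_static_pad("sink").add_probe(
#     Gst.PadProbeType.BUFFER, bbox_probe, 0)
```

If you need the coordinates in the original source resolution instead, you can rescale them yourself using frame_meta.source_frame_width and frame_meta.source_frame_height (assuming streammux padding is disabled, otherwise the padding offsets must be accounted for).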

