Nvinfer padding

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
7.1
• NVIDIA GPU Driver Version (valid for GPU only)
440
• Issue Type (questions, new requirements, bugs)
question

For debugging purposes, I have the following questions:

  1. Is get_converted_buffer in the nvinfer source code invoked in classification mode, or only in detection mode?

  2. Imagine a pipeline composed of:
    nvinfer(detection) + nvinfer(classification)
    Does detection pass each object location (rect_params) to classification directly, or does it pad the location to a definite size before passing it to classification?

  1. It is invoked for all modes. nvinfer is open source, so you can add logs to debug (see the sketch below).
  2. It will be converted to the model’s input size.
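
As a minimal sketch of such a log (assuming the DeepStream 5.0 layout of gstnvinfer.cpp; the variable names src_left, src_top, src_width, src_height, dest_width and dest_height are taken from that file from memory and may differ in other versions), a print added inside get_converted_buffer could look like:

    /* gstnvinfer.cpp, inside get_converted_buffer(), after dest_width and
     * dest_height have been derived from the crop rectangle */
    g_print ("get_converted_buffer: crop %ux%u at (%u,%u) -> dest %ux%u\n",
        src_width, src_height, src_left, src_top, dest_width, dest_height);

For a secondary (classification) nvinfer, the crop rectangle comes from the detected object’s rect_params, so this print shows both the object size and the size it is scaled and padded to.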

I have this problem too.
A size of (1280, 720) is set in the detection config file. When I print the destination image size in the get_converted_buffer function, 1280x720 is always returned, which is the detection output, even though the pipeline is composed of two nvinfer elements, one for detection and the other for classification. How can I access the input images to the classifier (the images belonging to the objects)?
Can I save them?

Please refer to dump_infer_input_to_file.patch.txt in DeepStream SDK FAQ - #9 by mchi; it can save the input images.

Thank you. I will test it soon.
One more question:
What is infer-dims in the classifier config file?
In my case it is 3:150:150. Does it mean each object location from detection will be padded to this size and then given to classification?

This patch file is hard to follow. Should I apply it to the nvinfer source code? Can it save the region around each object that is given to the classifier?

Yes, nvinfer is open source. Do you mean padding? Yes, this data will be converted to meet the model’s dimension requirement.

I think you didn’t understand my question.
I know nvinfer is open source :)))
Questions:

  1. What exactly is infer-dims in the classification config file?
  2. I mean symmetric padding in the nvinfer source code.
  3. Can dump_infer_input_to_file.patch.txt save the images which are given to classification?
    By images I mean the area surrounding each object that detection has detected.

The answers to both of these can be found in the DS doc - Gst-nvinfer — DeepStream 6.3 Release documentation. infer-dims means the input dimensions of your classification network. “Symmetric padding” is supported with the “symmetric-padding=1” config.
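
For reference, a classifier config snippet along these lines could look like the following. This is only a sketch: infer-dims and symmetric-padding are the Gst-nvinfer properties mentioned above, maintain-aspect-ratio is the related property that enables padding of the scaled input in the first place, and the 3;150;150 value just mirrors the example from this thread.

    [property]
    # network input dimensions: channels;height;width
    infer-dims=3;150;150
    # keep the object's aspect ratio when scaling it into 150x150
    maintain-aspect-ratio=1
    # pad equally on both sides instead of only on the right/bottom
    symmetric-padding=1

With these settings, each detected object is scaled to fit inside 150x150 while keeping its aspect ratio, and the leftover area is padded symmetrically.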

dump_infer_input_to_file.patch.txt is used to dump the image fed to the input layer of the network. For example, if the input dimensions and format of your classification network are 3:150:150 and RGB, this change dumps a 150x150 RGB viewable image.

I will test it soon.
Thank you, mchi.

There has been no update from you for a while, so we are assuming this is not an issue any more.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.