Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only): 440
• Issue Type (questions, new requirements, bugs):
For debugging purposes, I have the following questions:
Is get_converted_buffer in the nvinfer source code invoked in classification mode, or only in detection mode?
Imagine a pipeline composed of:
nvinfer (detection) + nvinfer (classification)
Does the detection element pass each object's location (rect_params) to the classifier directly, or does it pad the location to a fixed size and then pass it on?
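For reference, a two-stage pipeline like this can be sketched with gst-launch. This is only a config sketch; the source file and the two config-file names are placeholders, not files from any specific setup:

```
gst-launch-1.0 filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=detector_config.txt ! \
  nvinfer config-file-path=classifier_config.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

Here the first nvinfer runs as the primary detector and the second, configured with process-mode=2 (operate on objects), runs as the secondary classifier.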
I have this problem too.
A size of (1280, 720) is set in the detection config file. When I print the destination image size in the get_converted_buffer function, (1280, 720) is always returned, which is the detection input size, even though the pipeline is composed of two nvinfer elements, one for detection and one for classification. How can I access the input images to the classifier (the cropped images belonging to the detected objects)?
Can I save them?
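Once you can reach the raw RGB bytes of a converted object crop (for example by patching gstnvinfer as discussed below), the simplest way to make them viewable is to prepend a PPM header. This is a minimal, hedged sketch; the function name and the 2x2 sample buffer are made up for illustration, not part of the DeepStream API:

```python
def save_rgb_as_ppm(rgb_bytes: bytes, width: int, height: int, path: str) -> None:
    """Write raw interleaved RGB bytes as a binary PPM (P6) file.

    The result opens directly in common image viewers, with no extra
    libraries needed on the writing side.
    """
    assert len(rgb_bytes) == width * height * 3, "buffer size must match dims"
    with open(path, "wb") as f:
        # P6 header: magic, width/height, max channel value, then raw pixels.
        f.write(f"P6\n{width} {height}\n255\n".encode("ascii"))
        f.write(rgb_bytes)

# Hypothetical example: a 2x2 image (red, green, blue, white pixels).
pixels = bytes([255, 0, 0,  0, 255, 0,  0, 0, 255,  255, 255, 255])
save_rgb_as_ppm(pixels, 2, 2, "crop_dump.ppm")
```

For a classifier input of 3:150:150 you would call it with width=150, height=150 and the 150*150*3 bytes of the converted crop.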
Thanks. I will test it soon.
One more question:
What is infer-dims in the classifier config file?
In my case it is 3:150:150. Does it mean each object location from detection will be padded to this size and then given to the classifier?
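As I understand it, nvinfer scales each object crop to the network input dimensions; symmetric padding (letterboxing) only comes into play when maintain-aspect-ratio is enabled in the config. A rough sketch of that letterboxing arithmetic, with a hypothetical helper name and assuming a 150x150 network input (this is an illustration, not the actual nvinfer code):

```python
def scaled_dims_with_padding(obj_w: int, obj_h: int,
                             net_w: int = 150, net_h: int = 150):
    """Scale an object crop to fit the network input while preserving
    aspect ratio, and report the leftover padding in each dimension."""
    scale = min(net_w / obj_w, net_h / obj_h)
    new_w, new_h = int(obj_w * scale), int(obj_h * scale)
    pad_x, pad_y = net_w - new_w, net_h - new_h
    return new_w, new_h, pad_x, pad_y

# e.g. a 300x100 detection scaled into a 150x150 classifier input:
print(scaled_dims_with_padding(300, 100))  # → (150, 50, 0, 100)
```

Without maintain-aspect-ratio, the crop is simply stretched to 150x150 and no padding is added.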
dump_infer_input_to_file.patch.txt is used to dump the image fed to the input layer of the network. For example, if the dimensions and format of your classifier are 3:150:150 and RGB, this change dumps a 150x150 RGB viewable image.