Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson & dGPU
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e., which plugin or which sample application, and the function description.)
Hi, I have a question about configuring the preprocessing file that has been available since DeepStream 6.0.
What I want to implement is dividing each input source frame into 3 to 4 different tiles (like sliding windows), plus the full frame, so that the primary inference engine runs detection on each tile and on the full frame, and the results are combined at the end. I believe this will increase mAP on small objects. The idea comes from tiling approaches such as SAHI.
Can this be done by configuring the preprocessing file, or should I implement a custom utility to do it?
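To make the idea concrete, here is a sketch of how I imagine the preprocess config might look, loosely based on the sample config_preprocess.txt shipped with DeepStream 6.0. The ROI coordinates, tensor name, and dimensions below are made up for illustration, so please treat them as assumptions rather than a working setup:

```ini
[property]
enable=1
target-unique-ids=1
# size each unit (tile or full frame) is scaled to before batching
processing-width=640
processing-height=640
network-input-order=0
# N;C;H;W -- N chosen to match the 4 ROIs defined below (my assumption)
network-input-shape=4;3;640;640
tensor-data-type=0
tensor-name=input_1
scaling-buf-pool-size=6
tensor-buf-pool-size=6
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=0
process-on-roi=1
# three overlapping tiles plus the full frame for a 1920x1080 source;
# format is left;top;width;height repeated per ROI
roi-params-src-0=0;0;960;1080;480;0;960;1080;960;0;960;1080;0;0;1920;1080
```

If this is roughly the right direction, my remaining concern is how to merge detections from the tiles and the full frame back into one set of results downstream.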
Also, I am having some trouble interpreting the configuration file.
Are processing-width and processing-height the full-frame width and height (so that input images from the source will be resized to these dimensions)? Or are they a way of specifying ROIs?
And does either scaling-buf-pool-size or tensor-buf-pool-size affect network-input-shape? I guess the N in the network input shape is the number of frames/ROIs that will be batched into one buffer, but then what do the buf-pool-size settings do? Is there any documentation I can refer to?
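To make the question concrete, this is my current reading of those keys, which I would like confirmed or corrected (the numbers are illustrative assumptions):

```ini
# My reading: network-input-shape is N;C;H;W, where N is the maximum
# number of units (ROIs or full frames) batched into one output tensor,
# e.g. 2 sources x 4 ROIs each = 8:
network-input-shape=8;3;640;640
# Whereas the pool sizes, as I understand them, only control how many
# intermediate buffers are pre-allocated (scaled-surface buffers and
# output tensor buffers respectively), not the tensor shape itself:
scaling-buf-pool-size=6
tensor-buf-pool-size=6
```

Is that understanding correct, and if so, how should the pool sizes be chosen relative to batch size?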
Thanks a lot in advance.