Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU 2080
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)
Hello NVIDIA team! Is the custom bbox parser (nvdsinfer_custombboxparser) invoked once for the whole batch, immediately after nvinfer's batched inference, or once per stream?
Example: the NvDsInferParseYoloV3 function in nvdsparsebbox_Yolo.cpp. This function is called after the engine model runs inference (inside the nvinfer element). My DeepStream app runs with multiple cameras. My question is: “Is NvDsInferParseYoloV3 processed once per batch or once per stream? How does the processing work in detail?”
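For context, the custom parser has a per-frame shape: it receives one frame's output tensors and fills one frame's object list, and nvinfer loops over the batch calling it once per frame. Below is a minimal self-contained sketch of that control flow; the stub types (`LayerInfo`, `ObjectInfo`) and the functions are hypothetical stand-ins for the real `NvDsInfer*` structs declared in nvdsinfer_custom_impl.h, not the actual SDK code.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-ins for NvDsInferLayerInfo / NvDsInferParseObjectInfo,
// simplified for illustration only.
struct LayerInfo  { std::vector<float> buffer; };
struct ObjectInfo { float left, top, width, height, confidence; };

// A parser with the same per-frame shape as NvDsInferParseYoloV3:
// it sees ONE frame's output tensors and fills ONE frame's object list.
bool ParseOneFrame(const std::vector<LayerInfo>& outputLayers,
                   std::vector<ObjectInfo>& objectList)
{
    // Real parsing (anchor decoding, thresholding, NMS) is omitted; emit one
    // dummy box per output layer so the per-frame behaviour is visible.
    for (std::size_t i = 0; i < outputLayers.size(); ++i)
        objectList.push_back({0.f, 0.f, 10.f, 10.f, 0.9f});
    return true;
}

// Conceptually, nvinfer iterates over the frames in a batch and calls the
// parser once per frame; the parser itself never sees the whole batch.
std::vector<std::vector<ObjectInfo>>
ParseBatch(const std::vector<std::vector<LayerInfo>>& perFrameOutputs)
{
    std::vector<std::vector<ObjectInfo>> results(perFrameOutputs.size());
    for (std::size_t f = 0; f < perFrameOutputs.size(); ++f)
        ParseOneFrame(perFrameOutputs[f], results[f]);
    return results;
}
```

So with a batch of N frames, the parser body runs N times, once on each frame's tensors.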
“The postprocessing function is frame level.” So if my app runs with 50 cameras, will postprocessing be done 50 times (a loop with 50 iterations)? Is there any way I can multithread this process?
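Since nvinfer drives the parser loop internally, one place you could parallelize yourself is in your own batched post-processing (for example, on raw tensor output you handle outside the plugin). Because each frame's tensors and output list are independent, the per-frame parses can run concurrently. This is only a sketch of that idea using `std::async`; `PostprocessFrame` is a hypothetical stand-in for one frame's bbox decoding, and this is not something nvinfer does for you.

```cpp
#include <future>
#include <vector>

struct Detection { float x, y, w, h, score; };

// Hypothetical stand-in for one frame's post-processing (the real work
// would be bbox decoding plus NMS on that frame's output tensors).
std::vector<Detection> PostprocessFrame(int frameIdx)
{
    return { {0.f, 0.f, 5.f, 5.f, 0.8f} };
}

// Launch the independent per-frame parses concurrently. No locking is
// needed because each frame writes only its own result vector.
std::vector<std::vector<Detection>> PostprocessBatchParallel(int numFrames)
{
    std::vector<std::future<std::vector<Detection>>> futures;
    futures.reserve(numFrames);
    for (int f = 0; f < numFrames; ++f)
        futures.push_back(std::async(std::launch::async, PostprocessFrame, f));

    std::vector<std::vector<Detection>> results;
    results.reserve(numFrames);
    for (auto& fu : futures)
        results.push_back(fu.get()); // get() preserves frame order
    return results;
}
```

Whether this helps depends on how heavy the per-frame work is; for a cheap parser the thread-launch overhead can outweigh the gain.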