Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU 2080
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or for which sample application — and the function description.)
Hello NVIDIA team! Does Nvdsinfer_custombboxparser run on the whole batch immediately after nvinfer's batched inference, or is it run separately for each stream?
Can you elaborate on the context of your question? Where is "Nvdsinfer_custombboxparser" used in your example?
Example: the NvDsInferParseYoloV3 function in nvdsparsebbox_Yolo.cpp. This function is called after the engine model runs inference (in the nvinfer element). My DeepStream app runs with multiple cameras. My question is: "Will NvDsInferParseYoloV3 process the whole batch at once, or run separately for each stream? How does the processing work in detail?"
NvDsInferParseYoloV3 is just postprocessing that parses the output layers of the network into bboxes. The postprocessing function operates at the frame level.
gst-nvinfer and the function you mentioned are totally open source. You can read the code for details.
"The postprocessing function is frame level." So if my app runs with 50 cams, will postprocessing be done 50 times (a loop with 50 iterations)? Is there any way I can multithread this process?
It is already a multithreaded process. You can also modify the code in gst-nvinfer as you like; it is open source.
Thank you very much for your help!
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.