Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I'm reading the sample code of deepstream-infer-tensor-meta-test because I need a way to do postprocessing for one of my models, whose output type does not belong to any of the built-in categories (classifier, detector, segmentation). Doing the postprocessing in a buffer-probe callback seems like a good approach, but the variable `use_device_mem` in the sample code confuses me.
Is the output of the model's output layers on the device or on the host?
Whichever it is, why does the sample use a variable to decide whether to call cudaMemcpy?
Thank you!