Hello, I have a neural net that takes an input image and produces a depth map.
According to this topic, nvinfer supports custom neural networks, and according to the documentation here the plugin can attach the raw output tensor data as metadata of type NvDsInferTensorMeta.
Still, I'm having trouble figuring out how to access the actual output of my network, which would be the depth map. Is there a concrete example that could help?
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and any other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)
Thank you for your reply. I managed to get a good grasp of the structure of NvDsInferTensorMeta in order to parse the custom network's output, and I found this pose estimation example most helpful for understanding how to do so. Cheers!