Parsing custom network output

Hello, I have a neural net that takes an input image and produces a depth map.
According to this topic, nvinfer supports custom neural networks, and according to the documentation here, the plugin can attach the raw output tensor data as metadata of type NvDsInferTensorMeta.

Still, I'm having trouble figuring out how to access the actual output of my network, i.e. the depth map. Is there a concrete example that could help?
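For reference, my understanding from the docs is that the raw tensors have to be requested explicitly in the nvinfer configuration. A minimal sketch of the relevant [property] entries (the file names are placeholders, not my actual model):

```
[property]
gpu-id=0
# placeholder paths for the depth model
onnx-file=depth_model.onnx
model-engine-file=depth_model.onnx_b1_gpu0_fp16.engine
network-mode=2
batch-size=1
# ask nvinfer to attach the raw output tensors as NvDsInferTensorMeta
output-tensor-meta=1
# a depth network is neither a detector nor a classifier, so mark it as
# "other" (100) to skip the built-in post-processing; check the Gst-nvinfer
# docs for the exact values supported by your release
network-type=100
```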

Setup:
Jetson AGX Xavier
DeepStream 5.0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing the issue.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)

You can refer to the deepstream-infer-tensor-meta-test sample.
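In outline, the sample registers a probe on the nvinfer src pad and walks the frame user meta to find the attached tensor meta. A trimmed sketch of that part (error handling and the actual parsing are omitted; see the sample for the full code):

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static GstPadProbeReturn
pgie_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* With output-tensor-meta=1, nvinfer (running as a full-frame / primary
     * GIE) attaches one NvDsInferTensorMeta per frame to this list. */
    for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user;
         l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
        continue;

      NvDsInferTensorMeta *tensor_meta =
          (NvDsInferTensorMeta *) user_meta->user_meta_data;

      /* One NvDsInferLayerInfo per output layer of the network. */
      for (guint i = 0; i < tensor_meta->num_output_layers; i++) {
        NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[i];
        /* Host copy of the raw output buffer for this layer. */
        layer->buffer = tensor_meta->out_buf_ptrs_host[i];
        g_print ("layer %u: %s\n", i, layer->layerName);
        /* ...parse layer->buffer according to layer->inferDims here... */
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```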

Thank you for your reply. I managed to get a good grasp of the structure of NvDsInferTensorMeta in order to parse the custom network’s output, and found this pose estimation example more helpful for understanding how to do so. Cheers!
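For anyone who lands here later: the piece I was missing was how to go from NvDsInferTensorMeta to the actual depth values. Roughly like this, assuming a single FP32 output layer laid out as (1, H, W); the layer index and dimension order are assumptions about my model, not a general rule:

```c
#include <glib.h>
#include "gstnvdsinfer.h"

/* Sketch: pull a float depth map out of the tensor meta attached by nvinfer.
 * Assumes one FP32 output layer shaped (1, H, W); adapt the dimension
 * handling to your own network. */
static void
print_centre_depth (NvDsInferTensorMeta *tensor_meta)
{
  NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[0];
  if (layer->dataType != FLOAT)
    return;

  NvDsInferDims dims = layer->inferDims;   /* batch dimension not included */
  guint height = dims.d[1];
  guint width  = dims.d[2];
  const float *depth = (const float *) tensor_meta->out_buf_ptrs_host[0];

  /* Row-major: depth[y * width + x] is the prediction at pixel (x, y). */
  g_print ("depth map %ux%u, centre value %f\n", width, height,
           depth[(height / 2) * width + width / 2]);
}
```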

Great work.
