About the place where the nvinfer plugin saves inference results

Here is the structure of NvDsBatchMeta.
If we use an object detection model, nvinfer seems to save the inference result into obj_meta_pool, according to gstnvdsosd.c.
How does nvinfer automatically save the inference result into obj_meta_pool when we use an object detection model?

Additionally, please tell me where nvinfer saves the inference result if we use a super resolution model.


Gst-nvinfer (see the Gst-nvinfer section of the DeepStream 6.2 Release documentation) is the video inference module that saves its inference results into NvDsBatchMeta.
The attach_metadata_detector() function in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp is the function that generates the detector's output bboxes and attaches them to NvDsBatchMeta.

There is no structure to save "super resolution model" output in the current NvDsBatchMeta.
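One common workaround (a hedged suggestion, not something stated in the answer above) is to let gst-nvinfer attach its raw output tensors as user meta instead of parsed detections, and read them in an application probe. A sketch of the relevant gst-nvinfer config fragment, assuming the standard `output-tensor-meta` property:

```ini
[property]
# Attach raw output tensors (NvDsInferTensorMeta) as frame/batch user meta
output-tensor-meta=1
# network-type=100 means "other": skip nvinfer's built-in detector/classifier
# post-processing, leaving interpretation of the tensors to the application
network-type=100
```

The application can then search frame_meta->frame_user_meta_list for meta of type NVDSINFER_TENSOR_OUTPUT_META and post-process the super resolution output itself.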

gst-nvinfer is open source; please investigate the code and the documentation. It is not feasible to explain the code line by line on the forum.

Thank you for the reply.
