How to access the nvinfer tensor-meta in Python?

Hi,

I need to use the tensor output of the nvinfer plugin in Python code (preferably as an np.ndarray).
Based on the pgie_pad_buffer_probe function in deepstream_infer_tensor_meta_test.cpp, I wrote a Python function that retrieves the tensor-meta memory address. Here is the function:

import pyds
import pycuda.driver as cuda
import ctypes

ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]

def extract_tensor_meta(batch_meta: pyds.NvDsBatchMeta):
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            try:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break

            meta_type = user_meta.base_meta.meta_type
            if meta_type == pyds.NVDSINFER_TENSOR_OUTPUT_META:
                meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(meta.num_output_layers):
                    # Extract the raw pointer from the PyCapsule
                    mem_add = ctypes.pythonapi.PyCapsule_GetPointer(
                        meta.out_buf_ptrs_host, None)
                    # This is the line that raises the TypeError below
                    cuda.memcpy_dtoh(mem_add + i, 0)

            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break

But the cuda.memcpy_dtoh call throws:

TypeError: a bytes-like object is required, not 'int'

Prior to converting meta.out_buf_ptrs_host to a memory address, I simply tried:

cuda.memcpy_dtoh(meta.out_buf_ptrs_host[i], meta.out_buf_ptrs_dev[i])

But I encountered:

TypeError: 'PyCapsule' object is not subscriptable

I also tried converting the memory address into a Python object by dereferencing it, but with no success.
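For context on the second error: out_buf_ptrs_host appears to be a PyCapsule wrapping a C array of void* pointers (one per output layer), which is why indexing the capsule directly fails. A minimal sketch of reading the pointers out with ctypes, assuming that layout (the helper name is my own, not part of pyds):

```python
import ctypes

ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]

def capsule_to_pointers(capsule, n):
    # The capsule holds the address of a C array of n void* entries;
    # cast that address to a void** and read each element as a plain
    # integer address.
    base = ctypes.pythonapi.PyCapsule_GetPointer(capsule, None)
    arr = ctypes.cast(base, ctypes.POINTER(ctypes.c_void_p))
    return [arr[i] for i in range(n)]
```

Each returned integer is then a raw host-buffer address; it still has to be wrapped in a buffer-like object before anything such as memcpy can use it.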

My setup:
Hardware Platform: GPU

DeepStream Version: 5.1
TensorRT Version: 7.2.1.6
NVIDIA GPU Driver Version: 460.73.01

Reference for pycuda.driver.memcpy: Device Interface - pycuda 2022.1 documentation

deepstream_infer_tensor_meta_test.cpp address:

/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/deepstream_infer_tensor_meta_test.cpp

Could you please help me convert the nvinfer tensor output into a Python object?
Thanks a lot!

Okay, based on this thread’s solution, I managed to convert the tensor output to a numpy array:

    frame_outputs = []
    for i in range(meta.num_output_layers):
        layer = pyds.get_nvds_LayerInfo(meta, i)
        # Cast the NvDsInferLayerInfo buffer pointer to a float pointer and
        # wrap it as a numpy array; output_shapes[i] holds the known shape
        # of the i-th output layer of the model.
        ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
        v = np.ctypeslib.as_array(ptr, shape=output_shapes[i])
        frame_outputs.append(v)
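One caveat worth adding (my own note, not from the thread): np.ctypeslib.as_array returns a zero-copy view over memory owned by DeepStream, so the data should be copied if it is kept past the probe callback. A self-contained demonstration, with a plain ctypes float array standing in for the layer buffer:

```python
import ctypes
import numpy as np

# A plain ctypes float array stands in for the DeepStream-owned layer buffer.
buf = (ctypes.c_float * 4)(1.0, 2.0, 3.0, 4.0)
ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_float))

view = np.ctypeslib.as_array(ptr, shape=(4,))  # zero-copy view of buf
copy = np.array(view, copy=True)               # independent copy

buf[0] = 99.0  # the owner mutates (or later frees) the buffer
# view[0] now reflects the change (99.0), while copy[0] is still 1.0
```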

Glad to know the issue is resolved.