Does the pyds api for Deepstream 5 expose the raw tensor data of output layers?

I have an SGIE classifier from which I’d like to obtain the embedding from the last layer using the Python API of DeepStream 5.0.1. I run TensorRT 7.2.1, which is required for this model.

Inside my application (where I already get the correct bbox from my TensorRT detector, the raw frames, etc), I execute

obj_usr_meta_list = obj_meta.obj_user_meta_list
if obj_usr_meta_list:
    user_meta_list = pyds.NvDsUserMeta.cast(obj_usr_meta_list.data)
    user_meta_data = pyds.NvDsInferTensorMeta.cast(user_meta_list.user_meta_data)
    output_layers_info = user_meta_data.output_layers_info(0)
    print(output_layers_info.layerName)
    print(output_layers_info.buffer)
    print(output_layers_info)

which yields the output
>>> correct_layer_name
>>> None
>>> <pyds.NvDsInferLayerInfo object at 0x7f83c818cf10>

It appears to me that my model executes correctly, but I can’t think of any way to check whether it actually writes to this field.

I remember that with the beta version some structures that weren’t implemented would return None. Is this what’s happening here, or is my model wrong?

I also tried the out_buf_ptrs_host property of pyds.NvDsInferTensorMeta with the same result.

My nvinfer config:
[property]
gpu-id=0
net-scale-factor=1
model-engine-file=model.engine
batch-size=1
network-mode=0
network-type=1
process-mode=2
model-color-format=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
output-tensor-meta=1

Hi,

Have you tried checking our Python sample first?
It includes some examples for accessing tensor data:

https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/2931f6b295b58aed15cb29074d13763c0f8d47be/apps/deepstream-ssd-parser/deepstream_ssd_parser.py#L273

tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)

# Boxes in the tensor meta should be in network resolution which is
# found in tensor_meta.network_info. Use this info to scale boxes to
# the input frame resolution.
layers_info = []

for i in range(tensor_meta.num_output_layers):
    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
    layers_info.append(layer)

Thanks.

Hi,

Thanks, I hadn’t seen that there was a Python example for this.
Using the pyds.get_nvds_LayerInfo function I was able to get the layer info, and I can then access the values like this:

output_layers_info = pyds.get_nvds_LayerInfo(user_meta_data, 0)
for i in range(embedding_size):
    print(pyds.get_detections(output_layers_info.buffer, i))

Note that neither pyds.get_detections nor pyds.get_nvds_LayerInfo is mentioned in the documentation at
https://docs.nvidia.com/metropolis/deepstream/python-api/index.html

In order to access the whole embedding I had to get the pointer for output_layers_info.buffer and call
np.ctypeslib.as_array(ptr, shape=(embedding_size,)), but it would be ideal if there were a pyds method for this.
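
For reference, a minimal sketch of that workaround (assuming the output layer holds floats, that embedding_size matches the layer’s dimensions, and that pyds.get_ptr is used to obtain the raw address of the buffer capsule):

import ctypes
import numpy as np
import pyds

embedding_size = 128  # assumed size of the SGIE output layer

layer = pyds.get_nvds_LayerInfo(user_meta_data, 0)
# Cast the buffer capsule's address to a C float pointer and wrap it as a numpy array
ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
embedding = np.ctypeslib.as_array(ptr, shape=(embedding_size,))

Note that as_array only wraps the existing memory owned by DeepStream, so the result should probably be copied (embedding.copy()) if it is used outside the probe callback.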


Hi,

Sorry, this is not available yet.
Currently, you still need to use that workaround to get the numpy output.

Here is a similar topic for your reference:

Thanks.


Hi, I am able to get my output tensor embedding in Python, but how can I access that array or vector of shape 128 in C++?