Wrong / Missing output from Gst-nvinfer plugin

Please provide complete information as applicable to your setup.

Hardware Platform (Jetson / GPU): L4 GPU
DeepStream Version: 7.1
TensorRT Version: 10.3.0.26-1+cuda12.5 (CUDA Version: 12.6.77)
NVIDIA GPU Driver Version (valid for GPU only): 560.35.03
Issue Type( questions, new requirements, bugs): Bug

I have created a DeepStream pipeline whose last three elements are nvstreammux - nvinfer - appsink (and configured all relevant elements with batch-size=1).

The bug is that the buffers of the TensorRT output in the NvDsInferTensorMeta data are all NULL:
<capsule object NULL at 0x7f12c647cf30>

I assume I misconfigured nvinfer, but the documentation does not help me. My nvinfer config looks as follows:

[property]
gpu-id=0
batch-size=1
output-tensor-meta=1
gie-unique-id=1
network-type=100
raw-output-file-write=1

(where I left out the path to the onnx file).

When the TensorRT output is written to a file, its contents are correct.

How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used and other details for reproducing): If necessary I can provide a minimal code example with data.

What sample are you testing or referring to? What is the whole media pipeline? How do you know that the "tensorrt output in the NvDsInferTensorMeta data are all NULL"? Could you share the test code?

It’s a custom pipeline with custom data unrelated to any application example. I will need to check internally if I can share code and the model…

In the meantime: I know that they are all NULL because I print the buffers and the dimensions of my three outputs' NvDsInferLayerInfo structures (which I get using pyds.get_nvds_LayerInfo):

(10, 14, 4)
<capsule object NULL at 0x7f662e9e8f30>
(10, 14, 1)
<capsule object NULL at 0x7f662e9e8f30>
(10, 14, 8)
<capsule object NULL at 0x7f662e9e8f30>

noting that the three different buffers start at the same memory address.

I configured to write the binary outputs of the model to files (using raw-output-file-write=1 in the config) and those are correct.

You can recreate the bug with the files I have appended in the attached tar archive:

test_example1/nvinfer_config.txt
test_example1/resnet18_trafficcamnet_pruned.onnx
test_example1/test.py
test_example1/sample_720p.h264

You will need to change the path to the source file as well as to the nvinfer_config file in test.py, and add the path to the onnx file in nvinfer_config.txt.

The output I get looks like this:

batch: 0
<pyds.NvDsInferTensorMeta object at 0x7f31998cf230>
(4, 34, 60)
<capsule object NULL at 0x7f3199937c90>
(16, 34, 60)
<capsule object NULL at 0x7f3199937c90>
batch: 1
<pyds.NvDsInferTensorMeta object at 0x7f31998ff1b0>
(4, 34, 60)
<capsule object NULL at 0x7f3199937c90>
(16, 34, 60)
<capsule object NULL at 0x7f3199937c90>

test_example1.tar.gz (19.0 MB)

Sorry for the late reply. The nvinfer plugin low-level library and the Python bindings are open source. When raw-output-file-write=1 is set, the nvinfer plugin writes info->buffer to the file, and you noted that the file contents are correct, so the C pointer is not NULL. You can print info->buffer in the binding code to make sure the pointer is not NULL. Please refer to this doc for how to build the binding code.

Ok, I see, I had been printing the capsule object rather than the pointer value… my mistake!
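For anyone else who hits this: the "NULL" in `<capsule object NULL at 0x...>` is the capsule's *name*, not the wrapped pointer, and the printed address is that of the capsule object itself, not of the buffer. A capsule created without a name prints exactly this even when it wraps a perfectly valid address. A minimal sketch demonstrating this via CPython's C API through ctypes (the 0xDEADBEEF address is an arbitrary non-NULL dummy value):

```python
import ctypes

# Bind PyCapsule_New from the CPython C API.
PyCapsule_New = ctypes.pythonapi.PyCapsule_New
PyCapsule_New.restype = ctypes.py_object
PyCapsule_New.argtypes = [ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p]

# Wrap a non-NULL dummy address in an *unnamed* capsule.
cap = PyCapsule_New(ctypes.c_void_p(0xDEADBEEF), None, None)
print(cap)  # <capsule object NULL at 0x...> -- "NULL" is the missing name
```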

Just one last question: Is there really just the function “get_detection” (deepstream_python_apps/bindings/src/bindfunctions.cpp at cb7fd9c8aa012178527e0cb84f91d1f5a0ad37ff · NVIDIA-AI-IOT/deepstream_python_apps · GitHub) to get data from inside the buffer of an NvDsInferLayerInfo object? I would have expected that there exists a function that returns the buffer at least as a 1-dimensional numpy array or even already reshaped to the actual dimensions…