DeepStream SDK: How to use NvDsInferNetworkInfo to get the network input shape in Python

Hi, my setup is:

• Hardware Platform (Jetson Nano)
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1.3

I output tensor data from a custom YOLOv3 ONNX model via DeepStream (nvinfer), and during post-processing in Python I want to get the network input shape (not the original frame shape).
So I followed the "deepstream-ssd-parser" example at line 275:
```
# Boxes in the tensor meta should be in network resolution which is
# found in "tensor_meta.network_info". Use this info to scale boxes to
# the input frame resolution.
```
but it fails with:
```
Traceback (most recent call last):
  File "deepstream_tensordata_wo_osd.py", line 238, in pgie_src_pad_buffer_probe
    print("**************************************************************************************", tensor_meta.network_info)
AttributeError: 'pyds.NvDsInferTensorMeta' object has no attribute 'network_info'
```

• I wonder how to use the "NvDsInferNetworkInfo" information in Python if I want to scale boxes to the input frame resolution (roughly as sketched below).
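
Roughly, what I want to do in my probe is something like this (just a sketch: I assume the parsed box values are in network resolution, and frame_meta is the NvDsFrameMeta of the current frame):

```python
# Sketch only: scale one box from network resolution to source-frame resolution.
# net_w / net_h are the values I want to read from NvDsInferNetworkInfo.
def scale_box_to_frame(left, top, width, height, net_w, net_h, frame_meta):
    scale_x = frame_meta.source_frame_width / float(net_w)
    scale_y = frame_meta.source_frame_height / float(net_h)
    return left * scale_x, top * scale_y, width * scale_x, height * scale_y
```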

Hi,

Are you running the same source as deepstream_ssd_parser.py?
Or have some updates been applied that cause this error?

Thanks.

Hi,

I don't use deepstream_ssd_parser.py because I would rather not use Triton Inference Server.
When I ran that example, I was also unable to create nvinferserver.
So I modified deepstream_imagedata-multistream.py instead, and it runs successfully (I actually do get the tensor data).
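
For reference, the relevant change on my side was enabling raw tensor output on the nvinfer element so that NvDsInferTensorMeta is attached to the frames. A rough sketch (the config file name here is just my own):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Primary inference element; output-tensor-meta makes nvinfer attach
# NvDsInferTensorMeta to the frame user meta so a probe can read it.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "config_infer_primary_yoloV3_onnx.txt")  # my own config file
pgie.set_property("output-tensor-meta", True)
```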

Thanks.

Hi,

Excuse me, is there any reply on this?
I just want to know how to get the network input size (height/width/channels) in Python.
I'm using fixed values for now.

We apologise for the inconvenience we have brought to you. Your patience is very much appreciated.

Thanks.

Hi,

You should be able to get the layer info with a function similar to the one in deepstream_ssd_parser.py.
Would you mind sharing your implementation so we can check it directly?
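
For reference, a minimal sketch of how that sample reads the output layer info inside the pad probe (pyds.get_nvds_LayerInfo is the helper it uses; the surrounding function here is illustrative):

```python
import pyds

def read_output_layers(tensor_meta):
    # Iterate the output layers that nvinfer attached (requires output-tensor-meta=1).
    layers = []
    for i in range(tensor_meta.num_output_layers):
        layer = pyds.get_nvds_LayerInfo(tensor_meta, i)  # NvDsInferLayerInfo
        print("layer %d: name=%s" % (i, layer.layerName))
        layers.append(layer)
    return layers
```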

Thanks.

Hi,

I'm sorry for the late response.
I was out of the office for a week.
No, I don't mind. I will organize the implementation and then send it to your personal mail.

Thanks.

Hi,

I have sent my implementation as an attachment; please check your personal messages.

Thanks

Hi,

May I have an update on this?

Thanks.

Hi,

Thanks for sharing the source.

We are checking this internally.
Will share more information with you later.

Hi,

Thanks for the reply.
We look forward to receiving your reply.

Thanks.

Hi,

Thanks for your patience.

You will need NvDsInferNetworkInfo to get the network input dimensions.
But since it has not been added to the Python bindings yet, please add it manually.

Below are the detailed steps for your reference:

1. Setup

Please set up the device with JetPack 4.6 + DeepStream 6.0.

2. Get source

$ git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
$ cd deepstream_python_apps
$ git submodule update --init

Apply the following change:

diff --git a/bindings/src/bindnvdsinfer.cpp b/bindings/src/bindnvdsinfer.cpp
index 23b1f46..23ae69d 100644
--- a/bindings/src/bindnvdsinfer.cpp
+++ b/bindings/src/bindnvdsinfer.cpp
@@ -199,6 +199,7 @@ namespace pydeepstream {
         py::class_<NvDsInferTensorMeta>(m, "NvDsInferTensorMeta",
                                         pydsdoc::NvInferDoc::NvDsInferTensorMetaDoc::descr)
                 .def(py::init<>())
+                .def_readonly("network_info", &NvDsInferTensorMeta::network_info)
                 .def_readonly("unique_id", &NvDsInferTensorMeta::unique_id)
                 .def_readonly("num_output_layers",
                               &NvDsInferTensorMeta::num_output_layers)

3. Build and Install

$ sudo apt install -y git python-dev python3 python3-pip python3.6-dev python3.8-dev cmake g++ build-essential libglib2.0-dev libglib2.0-dev-bin python-gi-dev libtool m4 autoconf automake libgirepository1.0-dev libcairo2-dev pkg-config
$ pip3 install pycairo
$ cd {deepstream_python_apps}/bindings/
$ mkdir build
$ cd build/
$ cmake ..  -DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=6 -DPIP_PLATFORM=linux_aarch64 -DDS_PATH=/opt/nvidia/deepstream/deepstream
$ make
$ pip3 install ./pyds-1.1.0-py3-none-linux_aarch64.whl
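
After installing the rebuilt wheel, a quick sanity check (assuming the patched pyds is the one on your Python path) is to confirm the new attribute is exposed:

```python
import pyds

# The patched binding adds network_info as a read-only member of NvDsInferTensorMeta.
print("network_info" in dir(pyds.NvDsInferTensorMeta))  # expected output: True
```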

4. Get the network input dimension

...
tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
network_info = tensor_meta.network_info
print("Network Input : w=%d, h=%d, c=%d" % (network_info.width, network_info.height, network_info.channels))
...

Output:

Network Input : w=416, h=416, c=3
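
For completeness, a sketch of the enclosing probe that reaches NvDsInferTensorMeta from the batch meta (the loop structure follows the deepstream_python_apps samples; attach it to the pgie source pad and make sure output-tensor-meta is enabled):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    # Walk batch meta -> frame meta -> frame user meta to find the tensor meta
    # attached by nvinfer (requires output-tensor-meta=1).
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                network_info = tensor_meta.network_info  # available with the patched binding
                print("Network Input : w=%d, h=%d, c=%d"
                      % (network_info.width, network_info.height, network_info.channels))
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```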

Thanks.