[python] Extract raw `output_tensor_meta` from nvinferserver[triton] in numpy

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 510
• Issue Type( questions, new requirements, bugs) question
Hi, I have read through deepstream_python_apps/apps looking for examples of extracting the raw tensor_meta as numpy from nvinferserver, but I cannot find any examples related to this problem. I have already set postprocess { other {} } and output_control { output_tensor_meta: true } in the config file, and the model's config.pbtxt is configured as follows:
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 1024, 1920 ]
  }
]
output [
  {
    name: "count"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
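For context, the two settings I mentioned live in the nvinferserver config file (not in config.pbtxt). A minimal sketch of the relevant sections, assuming the nvinferserver protobuf schema; the model name and repo path are placeholders for my setup:

```protobuf
infer_config {
  unique_id: 1
  gpu_ids: [0]
  backend {
    triton {
      model_name: "my_model"   # placeholder, matches the config.pbtxt above
      version: -1
      model_repo {
        root: "./triton_model_repo"   # placeholder path
      }
    }
  }
  # Skip built-in parsing so raw tensors are attached as user meta
  postprocess {
    other {}
  }
}
output_control {
  output_tensor_meta: true
}
```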
Can you help me with this question? Thank you.

Please refer to deepstream-ssd-parser; it is a sample that parses the output tensor.
The configuration file is dstest_ssd_nopostprocess.txt. Code link: GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications
Here is a related topic about numpy: * URGENT * How to convert Deepstream tensor to Numpy?
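Following the deepstream-ssd-parser pattern, a pad probe on the nvinferserver source pad can walk the frame user meta, cast it to NvDsInferTensorMeta, and wrap each layer buffer as a numpy array. A minimal sketch, assuming DeepStream 6.1 pyds bindings; the shape `(1,)` for the "count" layer is taken from the config.pbtxt above, and the numpy-wrapping helper is a hypothetical name:

```python
import ctypes
import numpy as np


def layer_to_numpy(buffer_ptr, shape):
    """Wrap a raw FP32 layer buffer (a ctypes pointer) as a numpy array.

    as_array() does not copy, so copy() detaches the data from the
    GStreamer buffer before it is recycled.
    """
    ptr = ctypes.cast(buffer_ptr, ctypes.POINTER(ctypes.c_float))
    return np.ctypeslib.as_array(ptr, shape=shape).copy()


def pgie_src_pad_buffer_probe(pad, info, u_data):
    # pyds/Gst are only available inside a DeepStream environment
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(
                    user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    # "count" was declared with dims [ 1 ] in config.pbtxt
                    arr = layer_to_numpy(pyds.get_ptr(layer.buffer), (1,))
                    print(layer.layerName, arr)
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

Attach the probe to the nvinferserver element's src pad with `pad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)`, as the ssd-parser sample does.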


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.