Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only) 510
• Issue Type( questions, new requirements, bugs) question
Hi, I have read through deepstream_python_apps/apps
looking for examples of extracting the raw tensor_meta as numpy arrays from nvinferserver,
but I cannot find any that cover this. Actually, I have already set postprocess { other {} }
and output_control { output_tensor_meta: true }
in the nvinferserver config file, and the model's config.pbtxt is as follows:
platform: "onnxruntime_onnx"
max_batch_size: 1
input [
  {
    name: "input"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 1024, 1920 ]
  }
]
output [
  {
    name: "count"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
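For reference, here is the kind of pad probe I have been attempting, adapted from the tensor-meta handling in the deepstream-ssd-parser sample. This is only a sketch under my assumptions: the probe is attached to the nvinferserver src pad, the output layer is float32, and the `layer_to_numpy` helper name is my own.

```python
import ctypes
import numpy as np

def layer_to_numpy(layer_buffer, shape):
    """Wrap a raw float32 layer buffer (a C pointer/array) as a numpy array."""
    ptr = ctypes.cast(layer_buffer, ctypes.POINTER(ctypes.c_float))
    return np.ctypeslib.as_array(ptr, shape=shape)

def pgie_src_pad_buffer_probe(pad, info, u_data):
    # pyds is imported here so the numpy helper above stays usable
    # without a DeepStream installation.
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            # Tensor output is attached as NVDSINFER_TENSOR_OUTPUT_META
            # when output_tensor_meta: true is set.
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(
                    user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    # The "count" output has dims [ 1 ], so one float here.
                    count = layer_to_numpy(pyds.get_ptr(layer.buffer), (1,))
                    print(layer.layerName, count)
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The ctypes-to-numpy conversion itself works on any float32 buffer, so it can be verified outside the pipeline with a plain ctypes array.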
Can you help me with this? Thank you.