Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
x86 Quadro RTX
• DeepStream Version
5.1
• TensorRT Version
7.2.1-1
• NVIDIA GPU Driver Version (valid for GPU only)
460.56
• Issue Type( questions, new requirements, bugs)
Question
So far I have been using the basic deepstream-app sample for tasks such as object detection and classification. I just modify the configuration files for GIE and the top-level file and that works.
I’m using Triton Inference Server.
Now, I need to parse the output layers of a custom neural network. The output information is a cloud of XY points, for the moment I just want to keep it simple.
I want to write my own custom parser in C++ to grab the points in the output layer. For now, it will be ok to print out that I can successfully read the points.
Ideally, later I will send the points array to a server, but first I want to know which library files and functions have to be customized.
Thanks.
Hi,
You can dump the raw output of the layers and write a custom parser. Enable the relevant fields in the configuration; please refer to
Gst-nvinferserver — DeepStream 5.1 Release documentation (nvidia.com)
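For example, attaching the raw tensor output as metadata can be enabled in the nvinferserver config file (a minimal sketch in the plugin's protobuf text format; see the linked documentation for the full schema):

```
# In the gst-nvinferserver config file (protobuf text format):
output_control {
  output_tensor_meta: true   # attach raw layer output as NvDsInferTensorMeta
}
```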
After that, you can add your parser in the function pgie_pad_buffer_probe in sources/apps/sample_apps/deepstream_infer_tensor_meta-test.cpp:
/* Parse output tensor and fill detection results into objectList. */
Add your parser here.
later I will send the points array to a server, but first I want to know which library files and functions have to be customized.
→ You can refer to our message broker plugin. This plugin sends payload messages to a server using a specified communication protocol. It accepts any buffer that has NvDsPayload metadata attached and uses the nvds_msgapi_* interface to send the messages to the server. We support AMQP, Kafka, Azure, and Redis protocol brokers; you can choose one based on your needs, or implement your own protocol broker.
For details, please see Gst-nvmsgbroker — DeepStream 6.1.1 Release documentation