Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
x86 Quadro RTX
• DeepStream Version
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
Question
So far I have been using the basic deepstream-app sample for tasks such as object detection and classification. I just modify the GIE and top-level configuration files, and that works.
I’m using Triton Inference Server.
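For context, since I am on Triton (nvinferserver), my understanding is that the GIE config would need to disable the built-in postprocessing and load a custom library, roughly like the sketch below (the model name and library path are placeholders, not my actual setup):

```
infer_config {
  backend {
    triton {
      model_name: "my_point_net"   # placeholder model name
      version: -1
    }
  }
  postprocess {
    other {}                       # raw tensor output, no built-in bbox/classifier parsing
  }
  custom_lib {
    path: "/opt/parsers/libnvds_point_parser.so"   # hypothetical custom parser library
  }
}
output_control {
  output_tensor_meta: true         # attach raw output tensors as user meta downstream
}
```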
Now I need to parse the output layers of a custom neural network. The output is a cloud of XY points; for the moment I want to keep it simple.
I want to write my own custom parser in C++ to grab the points from the output layer. For now it will be enough to print the points to confirm I can read them successfully.
Ideally I will later send the points array to a server, but first I want to know which library files and functions have to be customized.