How to add custom post process after infer in deepstream python app

I trained a model with my own data. I see the Python demo deepstream-ssd-parser shows how to add a custom post-processing function with Triton Server. Is there any way to add a custom post-processing function in DeepStream Python without Triton Server? How can I do it?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)

• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description)

• Hardware Platform : GPU

• DeepStream Version : 6.1.0

• TensorRT Version : 8.2

• NVIDIA GPU Driver Version : 460.27.04

• Issue Type : question

You don’t want to use Triton? Please refer to the samples at deepstream_python_apps/ at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub. You can use nvinfer, which does not use Triton, and modify the configuration:
output-blob-names= #use the actual name
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/
The custom parser is open source; the source code is at /opt/nvidia/deepstream/deepstream-6.1.1/sources/libs/nvdsinfer_customparser
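As a concrete illustration, a minimal nvinfer config section for a detector with a custom C++ parser might look like the sketch below. The model file, output layer name, and parser function name are placeholders, not the actual values for your model; substitute your own.

```ini
[property]
# placeholder model file -- replace with your trained model
onnx-file=model.onnx
# 0 = detector
network-type=0
# use the actual output layer name of your model
output-blob-names=output
# hypothetical parser symbol exported by your custom library
parse-bbox-func-name=NvDsInferParseCustomMyModel
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so
```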

Thank you for your reply.
I saw that libnvds_infercustomparser.so is implemented in C++. But I want to know whether there is a way to implement the post-processing in the Python language?

Please refer to the C sample deepstream-infer-tensor-meta-test. You can access the output data (NvDsInferLayerInfo) by setting network-type to 100, then do the post-processing in the Python language.
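With network-type set to 100, nvinfer attaches the raw output tensor to the frame metadata, and a pad probe can parse it in Python. The sketch below shows only the parsing step in plain Python; the tensor layout (each detection as 6 floats: x1, y1, x2, y2, confidence, class_id) is an assumption for illustration, and in a real app the flat array would be read from NvDsInferTensorMeta/NvDsInferLayerInfo inside a probe (see deepstream-ssd-parser for that pattern), not hard-coded.

```python
def parse_detections(flat, num_dets, conf_threshold=0.5):
    """Parse a flat raw output tensor into detections.

    Assumes a hypothetical layout of 6 floats per detection:
    x1, y1, x2, y2, confidence, class_id. Adjust to your model.
    """
    detections = []
    for i in range(num_dets):
        x1, y1, x2, y2, conf, cls = flat[i * 6:(i + 1) * 6]
        if conf >= conf_threshold:
            detections.append({
                "bbox": (x1, y1, x2, y2),
                "confidence": conf,
                "class_id": int(cls),
            })
    return detections

# In a real pipeline this array would come from the tensor meta
# attached by nvinfer; here it is hard-coded for illustration.
raw = [
    10.0, 20.0, 110.0, 220.0, 0.91, 0.0,  # kept (conf >= 0.5)
    5.0, 5.0, 50.0, 50.0, 0.30, 1.0,      # dropped (conf < 0.5)
]
print(parse_detections(raw, 2))
```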

Yes, that’s what I want, thank you very much. In addition, I would like to know: compared with C++, does post-processing in Python have any impact on performance?

Why do you want to implement the post-processing in Python? Generally speaking, C++ is faster; you can add logs to monitor the time consumption.
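To monitor the time consumption as suggested, one simple approach is to wrap the Python post-processing call with a wall-clock timer. A minimal sketch (parse_output here is a hypothetical stand-in for your actual parser):

```python
import time

def parse_output(scores):
    # hypothetical stand-in for a real Python post-processing step:
    # keep only scores above a fixed confidence threshold
    return [s for s in scores if s > 0.5]

def timed(fn, *args):
    """Run fn(*args) and log its elapsed wall-clock time in ms."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print(f"{fn.__name__}: {elapsed_ms:.3f} ms")
    return result

kept = timed(parse_output, [0.1, 0.7, 0.9, 0.4])
```

Logging this per frame from inside the probe lets you compare the Python parser's cost directly against the C++ custom parser's.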

Thanks for your answer. I just need to do some research on DeepStream Python programming; maybe our users can only program in Python.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.