Issue with adding custom preprocessing step

I am using the DeepStream 5.0 app with the Triton Inference Server backend, based on the "deepstream-app-trtis" samples. I need to add custom preprocessing and post-processing steps other than the ones defined in PreProcessParams and PostProcessParams, but I cannot find proper examples.
I also want to fetch real-time metadata from the model outputs so that custom algorithms can extract insights from the data. Please suggest an approach and share some examples.


nvinferserver does not support customized preprocessing. However, Triton itself (nvinferserver is based on the low-level Triton lib) can support customized preprocessing; see Documentation - Latest Release :: NVIDIA Deep Learning Triton Inference Server Documentation. You can also refer to DeepStream 5.0 nvinferserver how to use upstream tensor meta as a model input - #5 by bcao.
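As a rough illustration of the Triton-side approach only (the model names, tensor names, and shapes below are hypothetical, not taken from the DeepStream sample), an ensemble model config can chain a custom preprocessing backend in front of the inference model so that preprocessing runs inside Triton:

```
name: "detector_ensemble"
platform: "ensemble"
max_batch_size: 1
input [ { name: "RAW_IMAGE",  data_type: TYPE_UINT8, dims: [ 3, 608, 608 ] } ]
output [ { name: "DETECTIONS", data_type: TYPE_FP32,  dims: [ 100, 7 ] } ]
ensemble_scheduling {
  step [
    {
      # hypothetical custom preprocessing backend
      model_name: "custom_preprocess"
      model_version: -1
      input_map  { key: "INPUT",  value: "RAW_IMAGE" }
      output_map { key: "OUTPUT", value: "preprocessed_image" }
    },
    {
      # hypothetical detection model consuming the preprocessed tensor
      model_name: "detector"
      model_version: -1
      input_map  { key: "input",  value: "preprocessed_image" }
      output_map { key: "output", value: "DETECTIONS" }
    }
  ]
}
```

The ensemble scheduler passes the intermediate tensor between the two steps on the server, so the DeepStream side only sees a single model.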

For post-processing, you can refer to the deepstream-ssd-parser sample.
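For the metadata part of your question, a minimal sketch in the style of that sample is shown below. It assumes nvinferserver is configured to attach the raw output tensors as metadata (built-in parsing disabled and output_tensor_meta enabled, as deepstream-ssd-parser does); the probe name is arbitrary.

```python
# Sketch: read raw model output tensors (NvDsInferTensorMeta) in a pad probe,
# in the style of the deepstream-ssd-parser sample.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds


def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                # Walk every output layer; layer.buffer holds the raw values,
                # so custom parsing/analytics can run right here.
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    # pyds.get_detections reads one float from the buffer,
                    # as done in the sample's ssd_parser.py.
                    first_value = pyds.get_detections(layer.buffer, 0)
                    print(frame_meta.frame_num, layer.layerName, first_value)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

The probe would be attached to the src pad of the nvinferserver element, e.g. `pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)`.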

We would like to have flexibility over the data flow between models and be able to configure it as needed. Is there any estimate on when this would be supported?

Elaborating on this: we want to use the detection model's output boxes in our custom tracking algorithm. Please suggest the best way to achieve this using Triton Server.

I have updated my previous comment, please check it.
In addition, refer to server/src/custom/identity at r20.03 · triton-inference-server/server · GitHub for a custom Triton backend example.
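For feeding the detector's boxes into a custom tracking algorithm on the DeepStream side, one option is a buffer probe downstream of nvinferserver that reads the boxes from NvDsObjectMeta. This is only a sketch; `MyTracker` and its `update()` method are hypothetical placeholders for your own algorithm, and it assumes object metadata is populated (either by built-in parsing or by your custom parser).

```python
# Sketch: pull detector boxes out of NvDsObjectMeta and hand them to a
# user-supplied tracker. MyTracker is a hypothetical placeholder.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds


class MyTracker:
    """Placeholder for a custom tracking algorithm."""
    def update(self, frame_num, detections):
        # detections: list of (left, top, width, height, confidence, class_id)
        pass


tracker = MyTracker()


def tracker_feed_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        detections = []
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params
            detections.append((rect.left, rect.top, rect.width, rect.height,
                               obj_meta.confidence, obj_meta.class_id))
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        tracker.update(frame_meta.frame_num, detections)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```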