Issue with adding custom preprocessing step

I am using the DeepStream 5.0 app with the Triton Inference Server backend, based on the samples from "deepstream-app-trtis". I need to add custom preprocessing and post-processing steps beyond the ones defined in PreProcessParams and PostProcessParams, but I cannot find proper examples.
I also want to fetch real-time metadata from the model outputs so I can build custom algorithms that extract insights from the data. Please suggest an approach and share some examples.


nvinferserver does not support a customized preprocess by itself; however, Triton (nvinferserver is built on the low-level Triton library) can support customized preprocessing via ensemble models: https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/models_and_schedulers.html#ensemble-models . You can also refer to the topic "DeepStream 5.0 nvinferserver how to use upstream tensor meta as a model input".
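As an illustration of the ensemble approach, here is a minimal `config.pbtxt` sketch that chains a custom preprocessing model into a detector inside Triton. The model names (`custom_preprocess`, `detector`), tensor names, and shapes below are hypothetical placeholders; adapt them to your actual model repository.

```
name: "preprocess_detect_ensemble"
platform: "ensemble"
max_batch_size: 1
input [
  { name: "RAW_IMAGE" data_type: TYPE_UINT8 dims: [ 3, 608, 608 ] }
]
output [
  { name: "DETECTIONS" data_type: TYPE_FP32 dims: [ 100, 7 ] }
]
ensemble_scheduling {
  step [
    {
      # Hypothetical custom preprocessing step, e.g. implemented as a custom backend
      model_name: "custom_preprocess"
      model_version: -1
      input_map  { key: "INPUT"  value: "RAW_IMAGE" }
      output_map { key: "OUTPUT" value: "PREPROCESSED" }
    },
    {
      # Hypothetical detection model consuming the preprocessed tensor
      model_name: "detector"
      model_version: -1
      input_map  { key: "IMAGE" value: "PREPROCESSED" }
      output_map { key: "BOXES" value: "DETECTIONS" }
    }
  ]
}
```

Triton then exposes the ensemble as a single model, so nvinferserver can call it without knowing about the intermediate tensors.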

For custom postprocessing, you can refer to the deepstream-ssd-parser sample.
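To show the kind of parsing that sample does, here is a small standalone sketch of an SSD-style output-tensor parser. It assumes the model emits rows of `[image_id, class_id, confidence, x1, y1, x2, y2]` with normalized coordinates; that layout (and the function name) are assumptions for illustration, not the exact deepstream-ssd-parser code.

```python
# Sketch of an SSD-style detection-tensor parser, independent of DeepStream.
# Assumed layout per detection: [image_id, class_id, confidence, x1, y1, x2, y2]
# with coordinates normalized to [0, 1].

def parse_ssd_output(flat_tensor, frame_width, frame_height, threshold=0.5):
    """Convert a flat detection tensor into pixel-space boxes above a threshold."""
    boxes = []
    for i in range(0, len(flat_tensor), 7):
        image_id, class_id, conf, x1, y1, x2, y2 = flat_tensor[i:i + 7]
        if conf < threshold:
            continue  # drop low-confidence detections
        boxes.append({
            "class_id": int(class_id),
            "confidence": conf,
            "left": x1 * frame_width,
            "top": y1 * frame_height,
            "width": (x2 - x1) * frame_width,
            "height": (y2 - y1) * frame_height,
        })
    return boxes

if __name__ == "__main__":
    # Two detections; only the first passes the 0.5 threshold.
    raw = [0, 1, 0.9, 0.1, 0.1, 0.5, 0.5,
           0, 2, 0.3, 0.2, 0.2, 0.4, 0.4]
    print(parse_ssd_output(raw, 1920, 1080, threshold=0.5))
```

In the real sample the same logic runs inside a pad probe, reading the raw tensor from the tensor metadata attached by nvinferserver and writing the resulting boxes back as object metadata.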

We would like to have flexibility over the data flow between models and be able to configure it as needed. Is there any estimate on when this will be supported?

To elaborate: we want to feed the detection model's output boxes into our custom tracking algorithm. Please suggest the best way to achieve this with Triton Server.
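Once the detection boxes are available as metadata (or parsed from the raw tensor as above), a custom tracker can consume them per frame. The sketch below is a deliberately simple greedy IoU matcher, purely to illustrate the interface; it is not DeepStream's built-in tracker, and the box format `(left, top, width, height)` is an assumption.

```python
# Toy IoU-based tracker: greedily matches each new detection to the existing
# track with the highest IoU above a threshold; unmatched detections start
# new tracks. Illustrative only, not a production tracking algorithm.

def iou(a, b):
    """Intersection-over-union of two (left, top, width, height) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

class IouTracker:
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track_id -> last seen box
        self.next_id = 0

    def update(self, detections):
        """Assign a track id to each detection; returns [(track_id, box), ...]."""
        assigned = []
        unmatched = dict(self.tracks)
        for box in detections:
            best_id, best_iou = None, self.iou_threshold
            for tid, tbox in unmatched.items():
                score = iou(box, tbox)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                best_id = self.next_id   # no overlap: start a new track
                self.next_id += 1
            else:
                del unmatched[best_id]   # each track matches at most one box
            self.tracks[best_id] = box
            assigned.append((best_id, box))
        return assigned
```

In a DeepStream pipeline you would call `update()` from a pad probe downstream of nvinferserver, passing the frame's detected boxes, and attach the returned track ids back onto the object metadata.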

I have updated my previous comment, please check it.
In addition, refer to https://github.com/triton-inference-server/server/tree/r20.03/src/custom/identity for a custom Triton backend.