pre-processing → ONNX inference with the ONNX Runtime backend → post-processing
For most models, you can simply use the ONNX Runtime backend in Triton Inference Server for the inference step.
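As a rough sketch, a Triton model repository entry for an ONNX model served through the ONNX Runtime backend might look like the config below (the model name, tensor names, and shapes here are hypothetical and must match your actual model):

```
# config.pbtxt (illustrative only)
name: "my_onnx_model"
backend: "onnxruntime"
max_batch_size: 8
input [
  {
    name: "input"          # hypothetical input tensor name
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"         # hypothetical output tensor name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

This file goes under the model's directory in the Triton model repository, next to the versioned folder containing the .onnx file.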
DeepStream's nvinferserver plugin includes built-in pre-processing (e.g. scaling and normalization).
Post-processing is normally model specific, so it usually needs to be implemented for your particular model.
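Since post-processing depends on the model, here is just one minimal sketch for a classifier: converting raw logits from the ONNX model into probabilities with a softmax and taking the top class (the output shape and values below are hypothetical, not from any specific model):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over the last axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# Hypothetical raw output tensor: batch of 1, 3 classes.
logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)
top_class = int(probs.argmax(axis=-1)[0])
```

A detector would instead need steps like box decoding and non-maximum suppression, which is why this stage is model specific.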
There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks