pre-processing → ONNX inference with the ONNX Runtime backend → post-processing
for most models, we can simply use the ONNX Runtime backend in Triton Inference Server for the inference step
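As a sketch, serving an ONNX model through Triton mainly requires a `config.pbtxt` in the model repository that selects the ONNX Runtime backend. The model name, tensor names, and shapes below are assumptions for illustration, not taken from any specific model:

```protobuf
# config.pbtxt — hypothetical example; model/tensor names and dims are assumptions
name: "my_onnx_model"
backend: "onnxruntime"
max_batch_size: 8
input [
  {
    name: "input"          # must match the ONNX graph's input name
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"         # must match the ONNX graph's output name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

The model file itself would sit alongside this config in a versioned directory (e.g. `my_onnx_model/1/model.onnx`), following Triton's model-repository layout.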
DeepStream's Gst-nvinferserver plugin handles some of the pre-processing (e.g., scaling, normalization, format conversion) before sending tensors to Triton.
post-processing is normally model-specific (e.g., decoding detections, applying NMS, mapping logits to labels), so it usually needs custom code
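As one example of model-specific post-processing, a classification model's raw logits can be converted to probabilities and ranked. This is a minimal sketch, assuming a classifier output; the logits and label names are made up for illustration:

```python
import math

def softmax(logits):
    # Numerically stable softmax over raw model outputs.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs, labels, k=3):
    # Pair each probability with its label and keep the k highest.
    ranked = sorted(zip(probs, labels), reverse=True)
    return ranked[:k]

# Hypothetical raw outputs and class names, standing in for a real model's output tensor.
logits = [2.0, 1.0, 0.1]
labels = ["cat", "dog", "bird"]
probs = softmax(logits)
print(top_k(probs, labels, k=2))
```

A detection model would instead need its own decoding step (anchor/box decoding plus NMS), which is exactly why this stage rarely generalizes across models.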