Post-processor for TAO apps through Triton

• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 11.4
• NVIDIA GPU Driver Version (valid for GPU only):

Could you provide examples of a post-processor for Triton apps?

What do you mean by post-processor? Do you mean the nvdspostprocess plugin? Gst-nvdspostprocess (Alpha) — DeepStream 6.1 Release documentation

What do you mean by “Triton Apps”?

I need a parser for bbox output.

There are several ways. gst-nvinfer includes some default postprocessing algorithms that you can select through its settings: Gst-nvinfer — DeepStream 6.1 Release documentation. If the default postprocessing cannot meet your model’s requirements, you can implement your own postprocessing algorithm via a custom parser.
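As a rough illustration of what such a custom bbox parser does: the real entry point is an extern "C" function matching the NvDsInferParseCustomFunc signature declared in nvdsinfer_custom_impl.h, which receives the model's output layers and fills a list of detected objects. The sketch below is a simplified, self-contained stand-in, not the actual DeepStream API: the struct, function name, and per-detection tensor layout ([classId, cx, cy, w, h, score]) are all assumptions for illustration.

```cpp
#include <cstddef>
#include <vector>

// Simplified stand-in for DeepStream's NvDsInferObjectDetectionInfo
// (the real type lives in nvdsinfer_custom_impl.h).
struct ObjectInfo {
    unsigned int classId;
    float left, top, width, height;
    float confidence;
};

// Toy "custom parser": reads a flat output tensor laid out as
// [classId, cx, cy, w, h, score] per detection and keeps boxes whose
// score exceeds the threshold, mirroring what a real
// NvDsInferParseCustomFunc does with its outputLayersInfo argument.
bool parseBBoxes(const float *layer, std::size_t numDetections,
                 float scoreThreshold, std::vector<ObjectInfo> &objectList)
{
    for (std::size_t i = 0; i < numDetections; ++i) {
        const float *d = layer + i * 6;
        if (d[5] < scoreThreshold)
            continue;                            // filter low-confidence boxes
        ObjectInfo obj;
        obj.classId = static_cast<unsigned int>(d[0]);
        obj.left   = d[1] - d[3] / 2.0f;         // center-x -> left edge
        obj.top    = d[2] - d[4] / 2.0f;         // center-y -> top edge
        obj.width  = d[3];
        obj.height = d[4];
        obj.confidence = d[5];
        objectList.push_back(obj);
    }
    return true;                                 // signal successful parse
}
```

In the real API the function is registered by name in the nvinfer/nvinferserver config, and the plugin loads it from the shared library you point it at.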

The postprocessing depends on the model. So what is your model? What are the output layers of your model, and what is the postprocessing algorithm?

As of now I need parsers for TAO applications (TLT models converted to TensorRT engine files). Could you help us? I use nvinferserver.

Which TAO model? nvinferserver is the same case as nvinfer: all nvinfer postprocessing can be used with nvinferserver too if the model is the same.

The YOLOv3 TAO model.

If you use the TAO YOLOv3 model, the postprocessing functions in /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo can be used with nvinferserver too.
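For reference, hooking a custom parser into nvinferserver is done through the postprocess and custom_lib sections of its protobuf-text config. The fragment below is illustrative only: the model name, class count, parser function name, and library path are assumptions modeled on the TAO sample configs, not values verified for any particular setup.

```text
infer_config {
  unique_id: 1
  backend {
    triton {
      model_name: "yolov3_tao"   # hypothetical name in your Triton model repo
      version: -1
    }
  }
  postprocess {
    detection {
      num_detected_classes: 4    # adjust to your model
      custom_parse_bbox_func: "NvDsInferParseCustomBatchedNMSTLT"
    }
  }
  custom_lib {
    # shared library that exports the parse function named above
    path: "/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so"
  }
}
```

The same pattern applies to nvinfer, where the equivalent keys are parse-bbox-func-name and custom-lib-path in its config file.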

Can you help me with a parser for the YOLOv4 .etlt model?

There is already TAO yolov4 deepstream sample here: deepstream_reference_apps/deepstream_app_tao_configs at master · NVIDIA-AI-IOT/deepstream_reference_apps (github.com)

There has been no update from you for a period, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
