Need tao_triton video inference

@Morganh

I think @h9945394143 is trying to convey that TAO inference through nvinfer is working, but when the same BBox parse function and the same bbox-parsing .so library are used with nvinferserver, it is not working.
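For context, both nvinfer and nvinferserver load the custom parser from the .so through the same C-linkage interface declared in nvdsinfer_custom_impl.h, so the same library should in principle serve both plugins. A sketch of the exported symbol (the signature follows the NvDsInferParseCustomFunc prototype from the DeepStream SDK header; the comments are mine):

```cpp
// Interface both nvinfer and nvinferserver load from the custom .so.
// Matches the NvDsInferParseCustomFunc prototype in nvdsinfer_custom_impl.h.
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomBatchedNMSTLT(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,  // raw output tensors
    NvDsInferNetworkInfo const &networkInfo,                  // network input dims
    NvDsInferParseDetectionParams const &detectionParams,     // per-class thresholds
    std::vector<NvDsInferObjectDetectionInfo> &objectList);   // parsed boxes (out)
```

The two configurations tried were: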

  1. GitHub - NVIDIA-AI-IOT/yolo_deepstream: yolo model qat and deploy with deepstream&tensorrt

custom_parse_bbox_func: "NvDsInferParseCustomYoloV4"

custom_lib_path: "./deepstream_yolov4/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so"

  2. deepstream_tao_apps

custom_parse_bbox_func: "NvDsInferParseCustomBatchedNMSTLT"

path: "/app/sriharsha/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so"

With this second configuration, no output comes through nvinferserver.

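For reference, nvinferserver declares the parser in its prototxt config rather than through the nvinfer-style parse-bbox-func-name / custom-lib-path keys. A minimal sketch of the relevant section (the model name, repository root, class count, and thresholds below are placeholders, not values from the original post):

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "yolov4_tao"       # placeholder Triton model name
      version: -1
      model_repo {
        root: "./triton_model_repo"  # placeholder model repository root
      }
    }
  }
  postprocess {
    labelfile_path: "./labels.txt"   # placeholder label file
    detection {
      num_detected_classes: 4        # placeholder class count
      custom_parse_bbox_func: "NvDsInferParseCustomBatchedNMSTLT"
      nms {
        confidence_threshold: 0.3    # placeholder threshold
        iou_threshold: 0.5           # placeholder threshold
        topk: 20                     # placeholder
      }
    }
  }
  custom_lib {
    path: "/app/sriharsha/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so"
  }
}
```

One thing worth double-checking is that custom_parse_bbox_func exactly matches the symbol exported by the .so (running nm -D on the library will list the exported symbols).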

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

For DeepStream, the TAO user guide officially states that models should be deployed via GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream. That is the repository we recommend.
So, if end users use GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications instead, we do not know its status, since we do not maintain or support it and there is no QA cycle to check whether TAO models work on it.

For video inference, GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream can certainly support it; it can run inference against H.264 video files.
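As a rough sketch of such a run (the config path and input file below are placeholders, and the exact flags vary between releases, so check the repository README):

```sh
# Hypothetical invocation of the deepstream_tao_apps detection sample
./apps/tao_detection/ds-tao-detection \
    -c /path/to/pgie_config.txt \
    -i /path/to/input.h264
```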
