How do I run a YOLOv4-tiny model with a MobileNet backbone in DeepStream?

I have trained a custom detector with TAO using the yolov4-tiny architecture with a MobileNet backbone. After exporting the model I have an .etlt file and an nvinfer_config.txt.
How can I run this model in a DeepStream application?
I am using DeepStream 6.0.
GPU: GTX 1080

Please refer to GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream. It includes a sample that runs yolov4-tiny for detection.
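For reference, a minimal nvinfer primary-GIE config for a TAO-exported YOLOv4-tiny detector might look like the sketch below. The file names, export key, class count, and input dimensions are placeholders — copy the actual values from the nvinfer_config.txt that `tao export` generated for your model, and point `custom-lib-path` at the TAO custom parser library built from the deepstream_tao_apps repo.

```ini
[property]
gpu-id=0
# Preprocessing values: take these from your generated nvinfer_config.txt
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
# Placeholder paths/key - replace with your exported model and export key
tlt-encoded-model=yolov4_tiny_mobilenet.etlt
tlt-model-key=<your-export-key>
labelfile-path=labels.txt
# Match the resolution the model was trained/exported with
infer-dims=3;384;1248
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
# Set to the number of classes in your dataset
num-detected-classes=3
# 0 = detector
network-type=0
cluster-mode=3
output-blob-names=BatchedNMS
# Custom bbox parser from deepstream_tao_apps (build libnvds_infercustomparser_tao.so first)
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=post_processor/libnvds_infercustomparser_tao.so

[class-attrs-all]
pre-cluster-threshold=0.3
```

On the first run, nvinfer converts the .etlt file into a TensorRT engine for your GPU; you can then add `model-engine-file=` pointing at the generated engine to skip rebuilding on subsequent runs.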

@fanzh thanks for your response. I successfully deployed my model to DeepStream.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.