Run tlt inference on a video

Hi,
Can you help me with how to run tlt inference on a video instead of images?
I am able to run inference on images, but I want to perform it on a saved video or a video stream. I would also like to know how to set the confidence threshold in the inference_config file, so that I only get detections for my class above the set confidence_threshold value.

You can export the .tlt model to an .etlt model and then deploy the .etlt model in DeepStream, which can run inference on saved video files as well as live video streams.
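As a rough sketch, the export step looks like the following. The exact sub-command and flags depend on your TLT/TAO version and network; the paths, spec file name, and $KEY variable below are placeholders, not values from this thread:

```
# Export the trained .tlt model to an encrypted .etlt file.
# All paths, the spec file, and $KEY are placeholders for your own values.
tlt yolo_v4 export -m /workspace/models/yolov4_trained.tlt \
                   -k $KEY \
                   -e /workspace/specs/yolo_v4_train.txt \
                   -o /workspace/models/yolov4.etlt
```

The key (-k) must be the same encryption key that was used when training the model.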
Reference:
https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/yolo_v4.html#deploying-to-deepstream
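Regarding the confidence threshold: once the model is deployed in DeepStream, the detection threshold is set in the nvinfer config file rather than in the tlt inference_config. A minimal sketch, assuming a recent DeepStream version (the class id 0 below is just an example):

```
[property]
# ... model file, key, labels, and other network settings go here ...

# Default confidence threshold applied to all classes
[class-attrs-all]
pre-cluster-threshold=0.4

# Optional per-class override (class id 0 used here as an example)
[class-attrs-0]
pre-cluster-threshold=0.6
```

Detections with confidence below pre-cluster-threshold are dropped before clustering, so raising it filters out low-confidence boxes for that class.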