Conflicting info in TLT 3.0 documentation for running YOLOv4 with DS 5.1

I use a dGPU and have DS 5.1, CUDA 11.1, cuDNN 8.0, and TRT 7.2 installed, and I want to run a YOLOv4 model trained with TLT 3.0.

This compatibility table states that I need to use TRT 7.1 instead of TRT 7.2 and that TRT-OSS is required.

However, in the latest commit to deepstream_tlt_apps, YOLOv4 was removed from the list of models that require TRT-OSS to run.

So my questions are:

  1. Do I need TRT-OSS to run YOLOv4 in DS 5.1 or not?
  2. If I do need TRT-OSS, which version should I install: 7.0, 7.1, or 7.2?

Regarding the use of tlt-converter, from the documentation:

For deployment platforms with an x86-based CPU and discrete GPUs, the tlt-converter is distributed within the TLT docker. Therefore, we suggest using the docker to generate the engine.

But I can also just download the tlt-converter from here.

  3. Should I use the tlt-converter in the docker or the standalone download?

  1. Yes, it is needed. See YOLOv4 — Transfer Learning Toolkit 3.0 documentation or GitHub - NVIDIA-AI-IOT/deepstream_tlt_apps: Sample apps to demonstrate how to deploy models trained with TLT on DeepStream. I will sync with the owner about the commit Add support for TLT 3.0GA models · NVIDIA-AI-IOT/deepstream_tlt_apps@57a343c · GitHub. You can also verify locally whether the TRT-OSS plugins are present; see the plugin-registry check sketched after this list.

  2. Please follow the YOLOv4 — Transfer Learning Toolkit 3.0 documentation.

  3. See YOLOv4 — Transfer Learning Toolkit 3.0 documentation: the default TLT package includes the tlt-converter built for TensorRT 7.2 with CUDA 11.1 and cuDNN 8.0. However, for any other version of CUDA and TensorRT, please refer to the overview section for the download. A hedged example of the converter invocation is sketched below.
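Regarding question 1, one way to check locally whether TRT-OSS is actually needed is to list which plugins your installed libnvinfer_plugin.so registers. Below is a minimal sketch using the TensorRT Python API; the plugin names BatchedNMS_TRT and BatchedNMSDynamic_TRT are my assumption of what a TLT YOLOv4 engine relies on, so treat this as a diagnostic hint rather than a definitive answer.

```python
# Minimal sketch: list the plugins registered by the installed
# libnvinfer_plugin.so and look for the batched-NMS creators that
# TLT YOLOv4 engines typically need (the plugin names are assumptions).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
# Load and register every plugin shipped in libnvinfer_plugin.so.
trt.init_libnvinfer_plugins(logger, "")

names = {c.name for c in trt.get_plugin_registry().plugin_creator_list}
print("TensorRT version:", trt.__version__)
for wanted in ("BatchedNMS_TRT", "BatchedNMSDynamic_TRT"):
    status = "found" if wanted in names else "missing (TRT-OSS build may be needed)"
    print(f"{wanted}: {status}")
```

If the batched-NMS creators are missing, the usual fix described in the deepstream_tlt_apps README is to build libnvinfer_plugin.so from the TRT-OSS branch matching your installed TensorRT version and replace the stock library with it.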
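Regarding question 3, the invocation is the same whichever converter you use; what matters is that the converter was built for the TensorRT/CUDA/cuDNN combination of the machine where the engine will run, because TensorRT engines are not portable across versions. Below is a hedged sketch of generating a YOLOv4 engine by driving tlt-converter from Python; the encoding key, input dimensions, output node name, and file paths are placeholders you must replace with the values from your own export step (see the YOLOv4 page of the TLT 3.0 docs for the exact flags your model needs).

```python
# Hedged sketch: drive tlt-converter from Python. The key, dims,
# output node name, and file paths are placeholders, not real values.
import subprocess

cmd = [
    "./tlt-converter",
    "-k", "YOUR_ENCODING_KEY",       # key used when the .etlt was exported
    "-d", "3,544,960",               # input dims (C,H,W) of the trained model
    "-o", "BatchedNMS",              # output node name (check your export log)
    "-t", "fp16",                    # precision of the generated engine
    "-e", "yolov4_resnet18.engine",  # where to write the engine
    "yolov4_resnet18.etlt",          # the exported model file
]
subprocess.run(cmd, check=True)
```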