TLT YOLOv3 and Deepstream 5.1: TensorRT library version mismatch

Setup:

• Hardware (T4/V100/Xavier/Nano/etc): Quadro RTX 5000
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc): Yolo_v3
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): Docker tag: nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3
Configuration of the TLT Instance
dockers: ['nvidia/tlt-streamanalytics', 'nvidia/tlt-pytorch']
format_version: 1.0
tlt_version: 3.0
published_date: 04/16/2021
• DeepStream version: Docker nvcr.io/nvidia/deepstream:5.1-21.02-samples
• How to reproduce the issue ?

I’ve used the Transfer Learning Toolkit YOLOv3 notebook (tlt_cv_samples_v1.1.0/yolo_v3/yolo_v3.ipynb) to create a TensorRT engine that I’d like to run in DeepStream. However, when I try to load the engine, I get a TensorRT version mismatch error:

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: The engine plan file is not compatible with this version of TensorRT, expecting library version 7.2.2 got 7.2.1, please rebuild.
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: engine.cpp (1646) - Serialization Error in deserialize: 0 (Core engine deserialization failure)
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_STATE: std::exception
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1567 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/tlt_pretrained_models/yolov3_trt.engine
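
For context, the engine is referenced from a standard nvinfer config along these lines (a trimmed sketch based on the DeepStream 5.1 TLT sample configs; the label file, class count, and parser library path are placeholders from my setup, so adjust as needed):

[property]
gpu-id=0
# serialized engine produced by tlt-converter
model-engine-file=/opt/nvidia/deepstream/deepstream-5.1/samples/models/tlt_pretrained_models/yolov3_trt.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
num-detected-classes=3
# parser name follows the deepstream_tlt_apps sample for YOLOv3
parse-bbox-func-name=NvDsInferParseCustomYOLOV3TLT
custom-lib-path=/path/to/libnvds_infercustomparser_tlt.so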

The command used to generate the engine with TLT is as follows:
!tlt tlt-converter -k $KEY \
                   -p Input,1x3x640x640,8x3x640x640,16x3x640x640 \
                   -e $USER_EXPERIMENT_DIR/export/trt.engine \
                   -t fp32 \
                   $USER_EXPERIMENT_DIR/export/yolov3_resnet18_epoch_$EPOCH.etlt

How can I (a) find an appropriate version of tlt-converter that uses TensorRT 7.2.2 or (b) use a different tool (trtexec?) to generate a compatible engine from my .etlt file?
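
For what it’s worth, this is how I confirmed the TensorRT version inside each container (the Python one-liner only works where the TensorRT Python bindings are installed; dpkg is the fallback):

python3 -c "import tensorrt; print(tensorrt.__version__)"
dpkg -l | grep -i tensorrt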

If you run inference inside the DeepStream docker, please download tlt-converter inside the DeepStream docker and generate the TRT engine there. See: TensorRT — Transfer Learning Toolkit 3.0 documentation
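
Roughly, the steps inside the DeepStream 5.1 container would look like this (a sketch only; the tlt-converter download link and archive name depend on the CUDA/cuDNN/TensorRT combination listed on that documentation page, so the file names below are placeholders, and the flags simply mirror the ones used in the notebook):

# inside the nvcr.io/nvidia/deepstream:5.1-21.02-samples container
# 1. download the tlt-converter build that matches this container's TensorRT 7.2.2
#    (link is on the TLT "TensorRT" documentation page; archive name is a placeholder)
unzip tlt-converter.zip
chmod +x tlt-converter

# 2. rebuild the engine from the .etlt with the same options as before
./tlt-converter -k $KEY \
    -p Input,1x3x640x640,8x3x640x640,16x3x640x640 \
    -e /opt/nvidia/deepstream/deepstream-5.1/samples/models/tlt_pretrained_models/yolov3_trt.engine \
    -t fp32 \
    yolov3_resnet18_epoch_$EPOCH.etlt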