Inferring detectnet_v2 .trt model in python

See Invalid device function error when export .tlt file to .etlt - #16 by Morganh: it is possible to build the TensorRT OSS plugins inside the Docker container.

Inside the NVIDIA TLT container the TensorRT version is 7.0.0.11. Should I downgrade it for batchTilePlugin to be supported?

git clone -b release/7.0 https://github.com/NVIDIA/TensorRT.git && \
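For reference, the remaining OSS plugin build steps after the clone might look like the sketch below. The cmake flag names follow the release/7.0 TensorRT OSS README, and the in-container library path is an assumption; check the README of the branch you actually check out:

```shell
cd TensorRT
git submodule update --init --recursive   # pull third-party dependencies
mkdir -p build && cd build
# TRT_LIB_DIR must point at the TensorRT libraries already installed in the
# container (the path below is an assumption -- locate yours with
# `find / -name "libnvinfer.so*" 2>/dev/null`)
cmake .. -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc) nvinfer_plugin            # rebuilds libnvinfer_plugin.so, which contains batchTilePlugin
```

The built `libnvinfer_plugin.so` then needs to replace (or be preloaded ahead of) the stock plugin library that ships in the container.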
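On the thread's main question of inferring the `.trt` engine from Python: a minimal synchronous inference loop with the TensorRT 7 Python API and pycuda generally looks like the following sketch. Binding shapes, preprocessing, and the DetectNet_v2 output decoding (coverage/bbox clustering) are deliberately left out, and the function names here are illustrative, not from the original thread:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    """Deserialize a serialized .trt engine file."""
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, batch):
    """Run one inference; batch is a contiguous float32 NCHW array."""
    with engine.create_execution_context() as ctx:
        bindings, outputs = [], []
        for i in range(engine.num_bindings):
            dtype = trt.nptype(engine.get_binding_dtype(i))
            host = np.empty(trt.volume(engine.get_binding_shape(i)), dtype=dtype)
            dev = cuda.mem_alloc(host.nbytes)
            bindings.append(int(dev))
            if engine.binding_is_input(i):
                cuda.memcpy_htod(dev, np.ascontiguousarray(batch, dtype=dtype))
            else:
                outputs.append((host, dev))
        ctx.execute_v2(bindings)
        for host, dev in outputs:
            cuda.memcpy_dtoh(host, dev)
        # For detectnet_v2 these raw outputs (coverage + bbox tensors)
        # still need clustering/NMS post-processing.
        return [host for host, _ in outputs]
```

This assumes the engine was built for a fixed batch size; for dynamic shapes the context's binding shapes must be set before allocation.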