• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 4.0.2
• TensorRT Version: 6.0.1-1+cuda10.2
• NVIDIA GPU Driver Version (valid for GPU only): 440.82
I’m trying to use some of the new Transfer Learning Toolkit models (TrafficCamNet, PeopleNet, …) with DeepStream 4.0.2. The documentation says to use the tlt-converter tool to export them to a TensorRT engine file.
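For reference, a typical tlt-converter invocation looks like the sketch below. The model file, key, input dimensions, and output node names are placeholders in the style of the DetectNet_v2-based models (PeopleNet, TrafficCamNet); substitute the actual values from your model’s NGC model card.

# Illustrative only -- file name, key, dims, and output nodes are assumptions:
./tlt-converter -k <ngc_model_key> \
    -d 3,544,960 \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -t fp16 \
    -m 16 \
    -e resnet34_peoplenet_fp16.engine \
    resnet34_peoplenet.etlt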
The problem is that DeepStream 4.0.2 is built with TensorRT 6.0, but the Transfer Learning Toolkit containers downloaded from NVIDIA NGC are built with TensorRT 7 (version v2.0_dp_py2) or TensorRT 5 (version v1.0.1_py2).
Why is there a gap in the TensorRT version between two consecutive builds?
Is there any way for me to get the tlt-converter for TensorRT 6?
@Morganh
I downloaded it and found out that this converter is for Jetson devices. The page says:
This document captures simple instructions to run the TLT converter for the Jetson platform.
Where can I find the GPU version of it?
Can you try to run the tlt-converter included in the 1.0.1 docker? Is it successful?
The tlt-converter utility included in this docker only works for x86 devices, with discrete NVIDIA GPUs.
I ran the tlt-converter included in the 1.0.1 docker and it works.
But this docker is built with TensorRT 5, so the generated engine file only works with TensorRT 5. I confirmed that by running dpkg -l | grep TensorRT
The output is:
ii graphsurgeon-tf 5.1.5-1+cuda10.0 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-dev 5.1.5-1+cuda10.0 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 5.1.5-1+cuda10.0 all TensorRT samples and documentation
ii libnvinfer5 5.1.5-1+cuda10.0 amd64 TensorRT runtime libraries
ii python-libnvinfer 5.1.5-1+cuda10.0 amd64 Python bindings for TensorRT
ii tensorrt 5.1.5.0-1+cuda10.0 amd64 Meta package of TensorRT
ii uff-converter-tf 5.1.5-1+cuda10.0 amd64 UFF converter for TensorRT package
What I need is a tlt-converter built for TensorRT 6.
According to the TLT user guide:
The TLT docker includes TensorRT version 5.1 for JetPack 4.2.2 and TensorRT version 6.0.1 for JetPack 4.2.3 / 4.3. In order to use the engine with a different minor version of TensorRT, copy the converter from /opt/nvidia/tools/tlt-converter to the target machine and follow the instructions for x86 to run it and generate a TensorRT engine.
Here is the result when I tried to run tlt-converter on an x86 machine:
/lib/ld-linux-aarch64.so.1: No such file or directory
How can I fix this?
What command did you run?
Here it is: ./tlt-converter.
Please copy the converter from /opt/nvidia/tools/tlt-converter inside your docker.
Your version is not correct; it is for the Jetson platform.
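A quick way to verify which build you have is to copy the binary out of the container and inspect it with file; the temporary container name here is just an example:

docker create --name tlt_tmp nvcr.io/nvidia/tlt-streamanalytics:v1.0.1_py2 /bin/bash
docker cp tlt_tmp:/opt/nvidia/tools/tlt-converter .
docker rm tlt_tmp
file ./tlt-converter

An aarch64 (Jetson) build reports something like “ELF 64-bit LSB executable, ARM aarch64” and fails on x86 with exactly the ld-linux-aarch64.so.1 error above; the x86 build reports “x86-64”.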
@Morganh
I followed your guide: I copied tlt-converter from the nvcr.io/nvidia/tlt-streamanalytics:v1.0.1_py2 container into a TensorRT 6 docker container and ran ./tlt-converter. Here is the error:
/opt/nvidia/tools/tlt-converter: error while loading shared libraries: libnvinfer.so.5: cannot open shared object file: No such file or directory
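It looks like the v1.0.1 converter is dynamically linked against the TensorRT 5 runtime; ldd confirms the unresolved dependency:

ldd ./tlt-converter | grep nvinfer

On a TensorRT 6 system this prints “libnvinfer.so.5 => not found”, since only libnvinfer.so.6 is shipped.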
I think there is no tlt-converter tool built for TensorRT 6. Can you give me the name of any image that has it?
OK, I will sync with the internal team about your request.
Hi @Morganh, is there any update from your team on my request?
The request has already been sent to the internal team. I’m pushing them to address it as soon as possible.
Sorry for the inconvenience.
Hi giangblackk,
For the current official 2.0_dp release, the tlt-converter supports TensorRT 7. So if you run tlt-converter on an x86 host PC, would you please update to TRT 7 and DS 5 on your host x86 system accordingly?
This will unblock your case. Currently, we have not built a standalone version of tlt-converter for TRT 6 on x86 systems (see the sketch after the links below).
If you run tlt-converter on the Jetson platform, we support three versions of tlt-converter:
https://developer.nvidia.com/tlt-converter-trt51
https://developer.nvidia.com/tlt-converter-trt60
https://developer.nvidia.com/tlt-converter-trt71
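As a minimal sketch of the x86 route above, assuming the NVIDIA container runtime and that the converter sits at the same path as in the v1.0.1 image:

docker run --runtime=nvidia -it --rm \
    nvcr.io/nvidia/tlt-streamanalytics:v2.0_dp_py2 \
    /opt/nvidia/tools/tlt-converter -h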
It’s easier for me to choose a DeepStream version now.
Thank you for your response.