Development best practices for TensorRT on TX1 and Ubuntu 18 desktop

Hi,
I'm using an Ubuntu 18 desktop with an NVIDIA graphics card for development, while in production we are running a TX1 with Ubuntu 16 installed using JetPack.
Ubuntu 18 comes with NVIDIA driver 390, on which we can install CUDA Toolkit 10 and TensorRT 5.
As far as I have read, these versions are not supported on the TX1; it only supports TRT 3 or 4 with CUDA 9.
I hope you can clarify this issue for me:

  1. If we keep the version difference, is TRT 5 backward compatible with TRT 3 or 4?
  2. What are the recommended versions to use, assuming we already have this hardware?
  3. We considered using Docker on our development machines. Can you explain how to configure a cross-compilation environment where we can build on the development machine and run on the TX1, while still being able to debug remotely?

Thanks a lot
Tal

Hi,

1. If we keep the version difference, is TRT 5 backward compatible with TRT 3 or 4?
In general, you will train a model on the desktop and run it on the Jetson.
The TensorRT version doesn't matter for this, since the shared file is a serialized model (e.g., .caffemodel or .pb).

But if you want to share the same program, it's recommended to check the API changes in TensorRT 5.0 (mostly in Python).
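
As a rough illustration, loading the shared serialized model with the C++ Caffe parser looks essentially the same across these TensorRT versions. The file names and the output blob name ("prob") below are placeholders for your own model:

```cpp
#include "NvInfer.h"
#include "NvCaffeParser.h"
#include <iostream>

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Parse the serialized Caffe model that is shared between machines.
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // "deploy.prototxt" / "model.caffemodel" / "prob" are placeholders
    // for your own network definition, weights, and output blob.
    const IBlobNameToTensor* blobs = parser->parse(
        "deploy.prototxt", "model.caffemodel", *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("prob"));

    // Build an engine optimized for the GPU this code runs on.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // ... run inference via engine->createExecutionContext() ...

    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

For a TensorFlow .pb model the flow is similar, but the graph first has to be converted to UFF and loaded with the UFF parser instead.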

2. What are the recommended versions to use, assuming we already have this hardware?
You don't need to install TensorRT on the host if it's just for training.

3. We considered using Docker on our development machines. Can you explain how to configure a cross-compilation environment where we can build on the development machine and run on the TX1, while still being able to debug remotely?
We don't recommend using Docker on Jetson due to numerous aarch64 issues.

If you still want to do so, you can check this GitHub for information.
You will need to create two Docker images with the same software versions (e.g., TensorFlow) but different hardware settings.

Thanks.

Hi,
Let me clarify the questions, as I think there is a misunderstanding.
We are using an Ubuntu laptop with an NVIDIA GPU as our development environment. This is the machine where we would like to build, test, and hopefully run TensorRT.
Our TX1 is used as a "production environment" where we would like to deploy our build and run system tests.

Is that a reasonable setup?

I know that TensorRT can run a model that was built using TensorFlow or Caffe via the appropriate parser.
Assuming that we already have a model, suppose we develop the inference part using the TensorRT 5 C++ APIs (on a laptop).

Can we assume it will work the same on the TX1 using TRT 4?

Thanks
Tal

Hi,

The code should be similar between the desktop and the Jetson.
But please remember that you will need to create the TensorRT PLAN file on each environment separately.
Since TensorRT optimizes the implementation based on the GPU architecture, you cannot use a PLAN file across different GPUs.
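
A minimal sketch of this build-and-cache step, to be run once per device (the file paths are placeholders):

```cpp
#include "NvInfer.h"
#include <fstream>
#include <vector>

using namespace nvinfer1;

// Serialize a freshly built engine to a PLAN file. This step has to run
// on the target device itself (once on the desktop GPU, once on the TX1),
// because the PLAN encodes optimizations for that specific GPU.
void savePlan(ICudaEngine* engine, const char* path)
{
    IHostMemory* plan = engine->serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(plan->data()),
              static_cast<std::streamsize>(plan->size()));
    plan->destroy();
}

// Reload a PLAN that was built earlier on this same GPU. The runtime
// comes from createInferRuntime(yourLogger).
ICudaEngine* loadPlan(IRuntime* runtime, const char* path)
{
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    std::vector<char> blob(static_cast<size_t>(in.tellg()));
    in.seekg(0);
    in.read(blob.data(), blob.size());
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}
```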

Thanks.

Hi,

Can you please point me to information about the version compatibility of TensorRT?

Thanks
Tal

Using the same version on both environments will be better.
So you can try TensorRT 4.0 on the desktop as well.
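
If it helps, here is a small sketch for checking at runtime that your binary was built against the same TensorRT version it is linked with. It assumes getInferLibVersion() is available in your TensorRT release:

```cpp
#include "NvInfer.h"
#include <cstdio>

int main()
{
    // Version this binary was compiled against (macros from NvInfer.h).
    int built = NV_TENSORRT_MAJOR * 1000 + NV_TENSORRT_MINOR * 100
              + NV_TENSORRT_PATCH;

    // Version of the libnvinfer library actually loaded at runtime.
    int loaded = getInferLibVersion();

    std::printf("TensorRT: built against %d, running %d\n", built, loaded);
    if (built != loaded)
        std::printf("Warning: header/library version mismatch\n");
    return 0;
}
```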

Thanks.