TensorFlow on DRIVE PX2 AUTOCHAUFFEUR

I am trying to run a TensorFlow-based application (preferably in Docker, but that's not strictly necessary) on an NVIDIA DRIVE PX2 (AUTOCHAUFFEUR).
I know there are many threads on this topic, but I've found them a little confusing, so I'd like to recap the various solutions here (if there actually are any).

  • Download a Docker image with preinstalled TensorFlow (document: `TENSORFLOW R1.0 DOCKER CONTAINER FOR DRIVE PX 2`). ISSUE: the specific TensorFlow version wrapped within that image requires CUDA 8, while my PX2 OS version is 5.0.5, which ships with a newer CUDA. QUESTIONS:
    1. Does this image need to be launched via `nvidia-docker`?
    2. Is nvidia-docker available for aarch64?
    3. Is it possible to downgrade the CUDA version, or should I flash the drive with an older version of the OS?
  • Some discussions refer to the following topic: https://devtalk.nvidia.com/default/topic/1031300/jetson-tx2/tensorflow-1-7-wheel-with-jetpack-3-2- where a TensorFlow wheel package can be downloaded manually. QUESTIONS:
    1. Does this package provide GPU-accelerated computation (similarly to the `tensorflow-gpu` package on x86)?
    2. Is this package "official" to any extent?
  • Somewhere in the forum the following command is also mentioned: `pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp33 tensorflow-gpu`. ISSUE: this does not work on my system (it says it can't find the package). QUESTIONS:
    1. Is this package supposed to work on the PX2? It seems to me that it targets a different platform, but so does the one mentioned in the previous bullet point, which is suggested by the NVIDIA forum moderators.
    2. What is the difference with respect to the manually downloaded package described in the previous bullet point?
  • TensorRT: I couldn't understand exactly what this consists of and what it entails; any link/info would be appreciated. If I understood correctly, it converts a TensorFlow model to a different (NVIDIA-proprietary?) format: would that mean that all the Python code built around the TensorFlow model has to be replaced, or would the two be somehow interchangeable?

Could somebody address these doubts?
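For reference, this is roughly how I check the CUDA mismatch mentioned in the first bullet point: parse the release number out of `nvcc --version` and compare it against what the prebuilt container expects. The `sample` string below is illustrative output, not a capture from my board, and the `required` version is the CUDA 8 requirement stated in the container document.

```python
import re

def cuda_version_from_nvcc(nvcc_output):
    """Extract the CUDA release (major, minor) from `nvcc --version` output."""
    match = re.search(r"release (\d+)\.(\d+)", nvcc_output)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

# Illustrative nvcc output (run `nvcc --version` on the board for the real one):
sample = (
    "nvcc: NVIDIA (R) Cuda compiler driver\n"
    "Cuda compilation tools, release 9.0, V9.0.252\n"
)

required = (8, 0)  # the CUDA version the r1.0 container's TensorFlow build expects
installed = cuda_version_from_nvcc(sample)
print(installed)              # (9, 0)
print(installed == required)  # False -> the prebuilt container will not match
```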

Dear dario.turchi,

1-1, 1-2: It is based on the Docker setup described in the document.
1-3: We recommend re-flashing with the older version.

2-1: Yes.

3-1: It's built for the TX2, so the dGPU won't work. Moreover, it does not work with a CUDA version different from the one in JetPack 3.3.
3-2: There is no big gap between them.
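As a quick sanity check before installing any of the wheels discussed above, you can compare the wheel filename's platform tag against your machine's architecture. This is only a sketch: the filename below illustrates the usual wheel naming scheme and is not a confirmed download name, and a matching architecture tag is necessary but not sufficient (an aarch64 wheel built for JetPack/TX2 can still target a different CUDA than the PX2 has).

```python
import platform

def wheel_matches_machine(wheel_filename, machine=None):
    """Check whether a wheel's platform tag matches this machine's architecture.

    The platform tag is the last dash-separated field before '.whl',
    e.g. 'linux_aarch64' in tensorflow_gpu-1.7.0-cp35-cp35m-linux_aarch64.whl.
    """
    if machine is None:
        machine = platform.machine()  # e.g. 'aarch64' on PX2/TX2, 'x86_64' on a PC
    tag = wheel_filename.rsplit(".whl", 1)[0].rsplit("-", 1)[-1]
    return machine in tag

# Illustrative filename following the standard wheel naming convention:
whl = "tensorflow_gpu-1.7.0-cp35-cp35m-linux_aarch64.whl"
print(wheel_matches_machine(whl, machine="aarch64"))  # True
print(wheel_matches_machine(whl, machine="x86_64"))   # False
```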

Regarding TensorRT, please refer to https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html
Webinar : Optimizing Self-Driving DNNs on NVIDIA DRIVE PX with TensorRT
http://info.nvidia.com/drivepx-tensorrt-reg-page.html