Jetson TX2 setup

Hi,

I am struggling to set up TensorRT on the Jetson TX2.

I can train a model in DIGITS and run it using the imagenet-camera app provided in jetson-inference.

  • How do I install TensorRT on the Jetson TX2? (I tried to install it using the Debian package but got errors.)
  • Which version of Ubuntu is required? (I am using 16.04 for JetPack 3.3.)
  • Which CUDA version is required for the Jetson? (I am using CUDA 9.0.)
  • Which versions of TensorFlow and OpenCV should I use?

Hello,

NVIDIA JetPack SDK is a comprehensive solution for building AI applications. TensorRT 4.0 is included with JetPack.

Please reference https://developer.nvidia.com/embedded/jetpack-notes for release details.

But when I do import tensorrt, it shows "module not found". Am I doing something wrong?

import tensorrt
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named tensorrt

Hello, the entire SDK package is flashed to the Jetson by default (unless TensorRT was explicitly deselected in JetPack). However, the library is called libnvinfer and the header is NvInfer.h.
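A quick way to confirm that the native runtime mentioned above was actually flashed, independent of whether the Python bindings are installed, is to try loading libnvinfer directly (a minimal sketch; the library name comes from the reply above, and the printed messages are just illustrative):

```python
# Check for the native TensorRT runtime (libnvinfer) flashed by JetPack.
# "import tensorrt" can fail even when libnvinfer is present, because the
# Python bindings are a separate component.
import ctypes

try:
    ctypes.CDLL("libnvinfer.so")
    print("libnvinfer found")
except OSError:
    # On a correctly flashed TX2 this should not happen; if it does,
    # re-run JetPack with the TensorRT component selected.
    print("libnvinfer missing")
```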

Here’s a link to a code example using it.

Thanks a lot. I can run the sample classification model from a pretrained model from the TensorFlow GitHub, but is there any sample code for running or testing a model trained with DIGITS TensorFlow (as it does not produce a .pb file)? I am a bit confused, as this trained model has to be converted to a frozen graph and then to a trt_graph.
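For what it's worth, the two-step conversion described above (checkpoint → frozen graph → TRT graph) can be sketched roughly as below. This is a sketch under assumptions: the function name, checkpoint prefix, and output node name ("softmax") are mine, not from DIGITS, and it targets the TF 1.x tf.contrib.tensorrt API that shipped in the JetPack 3.3 era; check your own graph for the real output op name.

```python
def convert_digits_checkpoint(ckpt_prefix, output_node, out_path):
    """Freeze a DIGITS-trained TF checkpoint, then build a TF-TRT graph.

    Sketch for the TF 1.x / tf.contrib.tensorrt API (JetPack 3.3 era).
    ckpt_prefix is the checkpoint prefix saved by DIGITS (an assumption
    here), and output_node is your graph's actual output op name.
    """
    import tensorflow as tf
    import tensorflow.contrib.tensorrt as trt  # TF-TRT integration

    # Step 1: restore the checkpoint and fold variables into constants,
    # producing the frozen GraphDef that DIGITS does not write out itself.
    with tf.Session(graph=tf.Graph()) as sess:
        saver = tf.train.import_meta_graph(ckpt_prefix + ".meta")
        saver.restore(sess, ckpt_prefix)
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, [output_node])

    # Step 2: let TensorRT replace supported subgraphs with TRT engines.
    trt_graph = trt.create_inference_graph(
        input_graph_def=frozen,
        outputs=[output_node],
        max_batch_size=1,
        max_workspace_size_bytes=1 << 25,
        precision_mode="FP16")  # the TX2 GPU supports FP16

    with tf.gfile.GFile(out_path, "wb") as f:
        f.write(trt_graph.SerializeToString())

# Hypothetical usage (paths and node name are placeholders):
# convert_digits_checkpoint("snapshot_30.ckpt", "softmax", "model_trt.pb")
```

The resulting .pb can then be loaded with a normal tf.import_graph_def call for inference.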