TensorRT

Hello,

I have a few 3D convolutional ResNets that I trained in Keras. Each ResNet has around 30M parameters. Are these models too big for a Jetson Nano? I would not think so, because I have seen a VGG-19 network run on the Nano at 5 FPS.

I froze the .h5 model and converted it into a .pb. The .pb model is around 1.2 times faster. I am now trying to further optimize these models using TensorRT, but I don't know how to convert these .pb files into .uff files. When I tried, the converter reported "converting conv3d and addv2 into custom layers". Is this okay? Or do I need to use GraphSurgeon, and if so, how do I do that?
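For reference, my conversion call looks roughly like this (the frozen-graph path and output node name below are placeholders for my actual model):

```python
import uff

# Convert the frozen TensorFlow graph to UFF. Conv3D and AddV2 apparently
# have no native UFF mapping, hence the "custom layers" message.
# "frozen_model.pb" and the output node name are placeholders.
uff_model = uff.from_tensorflow_frozen_model(
    "frozen_model.pb",
    output_nodes=["dense_1/Softmax"],
    output_filename="model.uff",
)
```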

Also, I am running TensorRT on Google Colab. Here is how I install TensorRT:

```python
import os
from termcolor import cprint

from google.colab import drive

# Mount Google Drive, where the TensorRT .deb package is stored
drive.mount("/content/drive/")

# Install the local TensorRT repo package
!sudo dpkg -i '/content/drive/My Drive/tensorrt/nv-tensorrt-repo-ubuntu1804-cuda10.0-trt5.1.5.0-ga-20190427_1-1_amd64.deb'

!sudo apt-get install -y --no-install-recommends libnvinfer5=5.1.5-1+cuda10.0
!sudo apt-get install -y --no-install-recommends libnvinfer-dev=5.1.5-1+cuda10.0

!sudo apt-key add '/var/nv-tensorrt-repo-cuda10.0-trt5.1.5.0-ga-20190427/7fa2af80.pub'
!sudo apt-get update
!sudo apt-get install tensorrt

#!sudo apt-get install python3-libnvinfer-dev
!sudo apt-get install uff-converter-tf

!pip3 install pycuda

# Build the custom plugin shipped with the uff_ssd sample
!cp -r /usr/src/tensorrt/samples/python/uff_ssd/plugin/ .
!cp -r /usr/src/tensorrt/samples/python/uff_ssd/CMakeLists.txt .
!mkdir build
os.chdir("build")
!cmake ..
!make
os.chdir("/content")

cprint("Finished installing necessary packages, please restart the runtime now...", "red")
```
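After restarting the runtime, I check the install with something like this (assuming the script above succeeded):

```python
# Verify that TensorRT and the UFF converter are importable
import tensorrt as trt
import uff

print(trt.__version__)  # should print 5.1.5 for the .deb above
```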

How can I install a later version of TensorRT, with the UFF module, on Google Colab?

Hi @nagarwal2004,
UFF conversion to TRT has been deprecated since TRT >= 7.
Hence the suggested flow is TF → ONNX → TRT, or TF-TRT; minimal sketches of both are below.
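For the ONNX route, a minimal Colab-cell sketch, assuming tf2onnx is installed and using placeholder file and node names (trtexec ships with the TensorRT packages):

```python
# Sketch of the TF -> ONNX -> TRT flow; node names are placeholders
!pip install tf2onnx

# Convert the frozen TensorFlow graph to ONNX
!python -m tf2onnx.convert --input frozen_model.pb \
    --inputs input_1:0 --outputs dense_1/Softmax:0 --output model.onnx

# Build and save a TensorRT engine from the ONNX model
!/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine
```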
Also, to install the latest TRT version, replace the .deb in your install script with the latest release.
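For the TF-TRT path, a minimal TF 1.x-style sketch (the graph path and output node name are placeholders):

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load the frozen graph (path is a placeholder)
with tf.io.gfile.GFile("frozen_model.pb", "rb") as f:
    frozen_graph_def = tf.compat.v1.GraphDef()
    frozen_graph_def.ParseFromString(f.read())

# Let TF-TRT replace supported subgraphs with TensorRT engines
converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph_def,
    nodes_blacklist=["dense_1/Softmax"],  # placeholder output node
    precision_mode="FP16",
)
trt_graph = converter.convert()  # returns an optimized GraphDef
```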

For queries related to the Jetson Nano, we suggest you raise them on the respective forum.

Thanks!