Running TensorFlow model 'inception-resnet-v2' on Jetson TX2 with TensorRT 3.0

Is it possible to run a very deep model like ‘inception-resnet-v2’ on Jetson TX2 using the TensorFlow library? I am planning to use TensorRT 3.0 for inference. As the model is large, there could be memory issues; any input in this regard would be really helpful. Thanks.

Hi,

We recommend using TensorRT for deep models on Jetson.

TensorRT provides an FP16 mode, which can cut the required memory roughly in half.
This allows users to run deeper models on a memory-limited device such as the TX2.

Please find more details about FP16 mode in our documentation:
Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
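
As a rough illustration, here is a minimal sketch of enabling FP16 mode when building an engine with the TensorRT 3.x C++ builder API (the function name, batch size, and workspace size are placeholders, not recommended values):

```cpp
// Minimal sketch: enable FP16 ("half2") mode on the TensorRT 3.x builder.
// Assumes a builder and a parsed network already exist; error handling omitted.
#include "NvInfer.h"

nvinfer1::ICudaEngine* buildFp16Engine(nvinfer1::IBuilder* builder,
                                       nvinfer1::INetworkDefinition* network)
{
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 25);   // illustrative scratch-space size (32 MB)

    // Enable FP16 only when the GPU has fast half-precision support (true on TX2).
    if (builder->platformHasFastFp16())
        builder->setHalf2Mode(true);         // weights/activations stored and computed in FP16

    return builder->buildCudaEngine(*network);
}
```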

Thanks.

Thanks for the quick response, appreciate it! But I have a few more questions:

  1. Since TensorRT can parse UFF (Universal Framework Format), can this model be used with the jetson-inference code? Is it compatible? Reason: my previous code uses jetson-inference with a *.caffemodel.

  2. The release notes say the following: “The Inception v4 network models are not supported with this Release Candidate with FP16 on V100.” Is this true for the Jetson TX2 as well? Does that mean https://github.com/tensorflow/models/tree/master/research/slim (inception-resnet-v2) is also not supported?

Hi,

1. As you said, jetson-inference creates its TensorRT engine from the caffe parser. There are two possible solutions for UFF (see the sketch after this list):

  • Modify jetson-inference to support UFF input, referring to our native sampleUffMNIST sample.
  • Serialize a TensorRT engine from the native sample, and launch jetson-inference with it directly.
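
To make the second option concrete, below is a minimal sketch that parses a UFF file, builds an FP16 engine, and serializes it to disk, assuming the TensorRT 3.x C++/UFF API. The node names, dimensions, and file names are placeholders, not the actual inception-resnet-v2 values:

```cpp
// Minimal sketch: parse a UFF model, build a TensorRT engine, and serialize it to disk.
// Node names, dimensions, and file names are placeholders; assumes TensorRT 3.x APIs.
#include <fstream>
#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

// Simple logger required by the TensorRT builder/runtime.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

void buildAndSerializeUffEngine()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();

    // Register the graph's input and output nodes (placeholder names, CHW dims).
    parser->registerInput("input", nvinfer1::DimsCHW(3, 299, 299));
    parser->registerOutput("InceptionResnetV2/Logits/Predictions");
    parser->parse("model.uff", *network, nvinfer1::DataType::kHALF);

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 25);
    if (builder->platformHasFastFp16())
        builder->setHalf2Mode(true);

    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

    // Serialize the engine; it can be reloaded later with IRuntime::deserializeCudaEngine().
    nvinfer1::IHostMemory* plan = engine->serialize();
    std::ofstream out("model.engine", std::ios::binary);
    out.write(reinterpret_cast<const char*>(plan->data()), plan->size());

    plan->destroy();
    engine->destroy();
    network->destroy();
    builder->destroy();
    parser->destroy();
}
```

jetson-inference (or any TensorRT application) can then load the serialized file with createInferRuntime() and deserializeCudaEngine() instead of rebuilding the network from a caffemodel.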

2. Sorry, we are not able to answer this question since we are not familiar with the architecture of Inception v4.
You can check our supported layers in detail here:

Thanks