Is it possible to run fast inference on the Jetson TX1 directly using the TensorFlow, Theano, and Torch frameworks?

I have been working with the Jetson TX1 for the last two weeks. I have read many blogs and forum posts and understood that TensorRT requires a .caffemodel file to optimize a network and build an inference plan. If we train with Caffe, the inference file is already a .caffemodel. I also found that a Torch model can be converted to .caffemodel using [url]https://zhanghang1989.github.io/Torch2CaffeConverter/[/url].

From this post [url]https://devtalk.nvidia.com/default/topic/981654/use-jetson-tx1-tensor-rt-to-run-my-tensorflow-model/[/url], I understood that it is not possible to convert TensorFlow models to .caffemodel, and the same applies to Theano. Is there any other method for converting Theano/TensorFlow models to Caffe? Please give some clarity on this issue. Is Caffe the only model format supported by TensorRT right now?

I also need to know whether it is possible to deploy TensorFlow, Theano, and Torch models directly on the Jetson TX1 without converting them to .caffemodel.

Hi,

  1. Our deep learning solution is to train with DIGITS on a desktop GPU and apply fast inference with TensorRT on the TX1.
    Currently, DIGITS supports Caffe and Torch, while TensorRT only supports Caffe; a rough sketch of the Caffe import path is included after point 3 below.
    https://developer.nvidia.com/embedded/twodaystoademo

  2. For model conversion, it’s recommended to ask the Caffe/TensorFlow/Torch/Theano developers directly, since they know their own frameworks best.

  3. Besides TensorRT, some forum users have successfully built TensorFlow r0.11 on the TX1, so TensorFlow models can also be run natively there.
    Please refer to https://github.com/tensorflow/tensorflow/issues/851
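
For reference, here is a minimal C++ sketch of the Caffe import path mentioned in point 1, loosely following the public TensorRT (GIE) samples. The file names, the output blob name "prob", and the batch/workspace settings are placeholders, and the exact API can vary slightly between TensorRT releases, so treat it as an outline rather than a drop-in program.

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// TensorRT needs an ILogger implementation for build/runtime messages.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // 1. Create the builder and an empty network definition.
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // 2. Parse the Caffe network description and trained weights.
    //    "deploy.prototxt" / "net.caffemodel" are placeholder file names.
    ICaffeParser* parser = createCaffeParser();
    const IBlobNameToTensor* blobs =
        parser->parse("deploy.prototxt", "net.caffemodel",
                      *network, DataType::kFLOAT);

    // 3. Tell TensorRT which Caffe blob is the network output
    //    ("prob" is a placeholder; use your model's output blob name).
    network->markOutput(*blobs->find("prob"));

    // 4. Build the optimized engine (the "plan").
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);   // 16 MB scratch space
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // The engine can now be serialized to a plan file, or used directly
    // through engine->createExecutionContext() for inference on the TX1.

    network->destroy();
    parser->destroy();
    builder->destroy();
    engine->destroy();
    return 0;
}
```

The resulting engine can be used immediately via createExecutionContext(), or serialized and later reloaded with IRuntime; since the optimized plan is specific to the GPU it was built on, it should be generated on the TX1 itself.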