I have been working on the Jetson TX1 for the last two weeks. I have read many blogs and forums and understood that TensorRT requires a .caffemodel inference file in order to optimize the network and build a plan. If we use Caffe, the inference file will already be a .caffemodel. I also found that we can convert a Torch model to a .caffemodel using [url]https://zhanghang1989.github.io/Torch2CaffeConverter/[/url].
From this post, [url]https://devtalk.nvidia.com/default/topic/981654/use-jetson-tx1-tensor-rt-to-run-my-tensorflow-model/[/url], I understood that it is not possible to convert TensorFlow models to .caffemodel. The same is the case for Theano. I want to know whether there is any other method for converting Theano/TensorFlow models into Caffe models. Please give some clarity on this issue. Is Caffe the only framework that TensorRT supports right now?
I also need to know whether it is possible to deploy TensorFlow, Theano, and Torch models directly on the Jetson TX1 without converting them to .caffemodel.