Hi,
-
Our deep learning solution is to use DIGITS for training on the desktop and TensorRT for fast inference on the TX1.
Currently, DIGITS supports Caffe and Torch, while TensorRT only supports Caffe.
https://developer.nvidia.com/embedded/twodaystoademo
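As a rough sketch, importing a DIGITS-trained Caffe model into TensorRT in C++ looks roughly like the snippet below (similar to the bundled TensorRT samples). The file names "deploy.prototxt"/"snapshot.caffemodel" and the output blob name "softmax" are placeholders for your own network, and exact API signatures can differ between TensorRT releases:

```cpp
#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the TensorRT builder.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Build a TensorRT engine from a Caffe model exported by DIGITS.
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Placeholder file names and output blob name; replace with your own.
    const IBlobNameToTensor* blobs =
        parser->parse("deploy.prototxt", "snapshot.caffemodel",
                      *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("softmax"));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);   // 16 MB of scratch space

    ICudaEngine* engine = builder->buildCudaEngine(*network);
    // ... create an execution context and run inference with the engine ...

    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```
-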
For model conversion, it's recommended to ask the Caffe/TensorFlow/Torch/Theano developers directly, since they know their own frameworks best.
-
Besides TensorRT, some forum users have successfully built TensorFlow r0.11 on the TX1.
Please refer to https://github.com/tensorflow/tensorflow/issues/851