I suppose the best way of finding out is just to try :-)
However, I notice there is a “jetson-inference” project in dusty-nv’s GitHub repositories.
It uses TensorRT.
I would prefer to reuse the existing C++ code I have that uses Caffe.
I have a trained Caffe model (from DIGITS).
I want to run real-time inference using this model on the Jetson.
I’m not afraid of C++.
Is there any reason why I should use jetson-inference/TensorRT instead of NVCaffe with cuDNN?
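For context, this is roughly the Caffe C++ path I'd be reusing — a minimal sketch, assuming the deploy prototxt and caffemodel that DIGITS exports (file names here are placeholders, and preprocessing is model-specific):

```cpp
// Sketch of plain Caffe C++ inference on the Jetson (requires Caffe/NVCaffe
// built with cuDNN). "deploy.prototxt" / "snapshot.caffemodel" are placeholders.
#include <caffe/caffe.hpp>

int main() {
  caffe::Caffe::set_mode(caffe::Caffe::GPU);  // run on the Jetson's GPU

  // Load the network definition and the DIGITS-trained weights.
  caffe::Net<float> net("deploy.prototxt", caffe::TEST);
  net.CopyTrainedLayersFrom("snapshot.caffemodel");

  // Copy a preprocessed image into the input blob (omitted here)...
  caffe::Blob<float>* input = net.input_blobs()[0];
  // std::memcpy(input->mutable_cpu_data(), image, input->count() * sizeof(float));

  // ...then run a forward pass and read the output probabilities.
  net.Forward();
  const float* probs = net.output_blobs()[0]->cpu_data();
  (void)probs;  // e.g. argmax over probs for classification
  return 0;
}
```

My understanding is that TensorRT would optimize the same network (layer fusion, FP16) for faster inference, at the cost of not reusing this code directly — which is really what my question is about.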