GPU Novice: Using CUDA Directly vs. Using Caffe

I am a GPU novice and just getting into this area for some projects. I will be using the TK1 and TX1 dev kits. My questions are:

  1. Is it advisable for a newbie like me to get started directly with CUDA on these kits, or should I try some standard implementations with Caffe first? My organization has a ConvNet that I have to implement, first on a GPU and then on an FPGA.

  2. If I am using only standard layers and the network is pretrained, is there really any need to use CUDA directly? From what I have read, all standard layers can be implemented with Caffe and Google Protobuf without much hassle.

Eh, you might be better off asking this on the CUDA forum.

Some folks here are familiar with deep learning, but you'd probably find more expertise there. FWIW, it seems to me that if Caffe does the job, you might as well stick with that and pick up whatever CUDA bits you need along the way.
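To your second question: if you only need standard layers with pretrained weights, the whole network is just a prototxt definition (parsed via protobuf) plus a caffemodel weights file — no CUDA code on your part. A minimal sketch of what such a definition looks like (the layer names and shapes below are invented for illustration, not your organization's network):

```
name: "MinimalConvNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  # Batch of 1, 3-channel 32x32 input (example shape)
  input_param { shape: { dim: 1 dim: 3 dim: 32 dim: 32 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 16 kernel_size: 3 stride: 1 }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "fc1"
  type: "InnerProduct"
  bottom: "pool1"
  top: "fc1"
  inner_product_param { num_output: 10 }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc1"
  top: "prob"
}
```

Caffe dispatches each of these layers to its GPU implementation for you when you run in GPU mode, which is why hand-written CUDA usually only comes up for custom layers or unusual performance requirements.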