My usage model:
Training phase: Train using Caffe, within the NVIDIA DIGITS environment.
Inference phase: Use the deploy.prototxt and .caffemodel files in a separate embedded environment (without Caffe).
Owing to the limited memory in the embedded environment, I would like the trained model to incorporate quantization. Is there a way to support quantization during training in DIGITS? It appears that DIGITS doesn't allow me to hand-edit the solver.prototxt file.