DIGITS with FP16 training

I have been trying to use NVCaffe's FP16 support through DIGITS. However, despite setting

default_forward_type: FLOAT16
default_backward_type: FLOAT16
default_forward_math: FLOAT16
default_backward_math: FLOAT16

the caffe_output.log states that it is using regular FLOAT instead:

I0706 15:49:26.557221   824 net.cpp:109] Using FLOAT as default forward math type
I0706 15:49:26.557232   824 net.cpp:115] Using FLOAT as default backward math type
I0706 15:49:26.557240   824 layer_factory.hpp:172] Creating layer 'train-data' of type 'Data'
I0706 15:49:26.557252   824 layer_factory.hpp:184] Layer's types are Ftype:FLOAT Btype:FLOAT Fmath:FLOAT Bmath:FLOAT

Is this supported in DIGITS at all, and if so, what am I missing?

There is an option, "Blob format", on the model page in DIGITS that instructs NVCaffe to train in FLOAT16; it is needed in addition to the four options in your post.
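
In case it helps, NVCaffe also accepts per-layer type overrides in the network prototxt, so numerically sensitive layers can stay in full precision while the rest run in FP16. A minimal sketch, assuming NVCaffe 0.16 or later; the layer and blob names below are made up for illustration:

# Net-wide defaults: store blobs and do math in half precision.
default_forward_type: FLOAT16
default_backward_type: FLOAT16
default_forward_math: FLOAT16
default_backward_math: FLOAT16

# Hypothetical per-layer override: keep the loss layer in full precision.
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc_out"
  bottom: "label"
  top: "loss"
  forward_type: FLOAT
  backward_type: FLOAT
}

If the settings take effect, the net.cpp lines in your log should report FLOAT16 rather than FLOAT as the default forward/backward math type.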