How to run Caffe in FP16 mode?

I cloned caffe-0.16 from NVIDIA/caffe (https://github.com/NVIDIA/caffe), edited Makefile.config to enable TEST_FP16, and built it successfully. If it matters, I built it without cuDNN support.

Now, how do I run FP16 mode in Caffe?

The performance numbers I get when running Caffe with the default options are about 2x what I was getting earlier with the BVLC version (which has no FP16 support).
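For reference, the Makefile.config change was just enabling that flag (paraphrasing the comment in the file):

  # Build NVCaffe with half-precision (FP16) support
  TEST_FP16 := 1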

Hi,

Do you want to run inference on your model on Jetson with TensorRT?
If yes, please use NvCaffe-0.15, since there are some compatibility issues with NvCaffe-0.16.

Usually, we train the model with FP32 precision and then run inference in FP16 mode with TensorRT.
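FP16 is selected when the TensorRT engine is built, not at run time. Below is a rough sketch of the Caffe-to-TensorRT flow, assuming the API of the TensorRT 1.x/2.x generation that shipped for TX1 (newer releases replace setHalf2Mode with setFp16Mode); the file names and the "prob" output blob are placeholders for your own network:

  #include <iostream>
  #include "NvInfer.h"
  #include "NvCaffeParser.h"

  using namespace nvinfer1;
  using namespace nvcaffeparser1;

  // Minimal logger the TensorRT builder requires.
  class Logger : public ILogger
  {
      void log(Severity severity, const char* msg) override
      {
          if (severity != Severity::kINFO)
              std::cout << msg << std::endl;
      }
  } gLogger;

  int main()
  {
      IBuilder* builder = createInferBuilder(gLogger);
      INetworkDefinition* network = builder->createNetwork();

      // Parse the Caffe deploy file and weights as FP16.
      // "deploy.prototxt", "model.caffemodel" and "prob"
      // are placeholders for your own network.
      ICaffeParser* parser = createCaffeParser();
      const IBlobNameToTensor* blobs =
          parser->parse("deploy.prototxt", "model.caffemodel",
                        *network, DataType::kHALF);
      network->markOutput(*blobs->find("prob"));

      builder->setMaxBatchSize(1);
      builder->setMaxWorkspaceSize(16 << 20);
      if (builder->platformHasFastFp16())
          builder->setHalf2Mode(true);   // request FP16 kernels

      ICudaEngine* engine = builder->buildCudaEngine(*network);

      // ... create an execution context and run inference ...

      engine->destroy();
      network->destroy();
      parser->destroy();
      builder->destroy();
      return 0;
  }

The weights are parsed as kHALF and setHalf2Mode requests the FP16 kernels; the platformHasFastFp16() check guards the call on GPUs without native FP16.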
If you want an NvCaffe build for Jetson with FP16 mode, please check here:

Thanks

Hi,

Thanks for your reply.
Yes, I want to run inference with FP16 on Jetson TX1.

To use NvCaffe-0.15, I cloned the caffe-0.15 branch, but I don’t see FP16 support in this branch: there is no TEST_FP16 flag in Makefile.config.example.

The link you posted says to clone experimental/fp16, but I don’t see that branch anymore.

Thanks

Hi,

It’s recommended to use TensorRT rather than Caffe for FP16 mode on Jetson.
TensorRT can be installed via JetPack directly.
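If you want to confirm what JetPack installed, a broad package query works (package names differ across JetPack releases, so the pattern below is intentionally loose):

  dpkg -l | grep -Ei "tensorrt|nvinfer"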

Thanks.

Hi, AastaLLL,

From your comments, could I assume that:

  1. FP16 support for inference and training is not stable in either the nvidia/caffe 0.15 or 0.16 branch?
  2. TensorRT supports FP16 inference? I have not used TensorRT before; does it also support FP16 training?

Thanks

Hi,

Usually, we don’t use FP16 mode for training, only for inference.

TensorRT supports FP16 mode.
If you use the Caffe framework for training, it should be simple to run inference on the model with TensorRT.
Here is our tutorial for your reference: dusty-nv/jetson-inference (https://github.com/dusty-nv/jetson-inference), the Hello AI World guide to deploying deep-learning inference networks with TensorRT on NVIDIA Jetson.
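The build-and-run steps from that repository’s README look roughly like this (the exact steps and sample image names may have changed, so please follow the README itself):

  git clone https://github.com/dusty-nv/jetson-inference
  cd jetson-inference
  mkdir build && cd build
  cmake ../
  make
  cd aarch64/bin
  ./imagenet-console orange_0.jpg output_0.jpg

The samples should pick FP16 automatically through TensorRT when the GPU supports it.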

Thanks.

Thanks for all the help. I am on vacation right now. Will come back and try TensorRT.

Happy holidays y’all