Jetson TX1 & 16-bit floating-point operations

Hello,

We currently use CUDA 7.0 on the Jetson TX1 for an application that runs 32-bit floating-point operations.

  1. Is it possible to convert it to 16-bit floating point without moving to CUDA 7.5?
    I read in one of the posts that some features were backported to CUDA 7.0. Is this one of them?

  2. If not, how do we get the CUDA 7.5 Toolkit to run on Ubuntu on the Jetson TX1?

Thank you!

Yes, FP16 is supported in the JTX1 version of CUDA 7.0.
You will also want to compile for sm_53 (e.g. nvcc -arch=sm_53).
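Here is a minimal sketch of what that might look like, assuming the cuda_fp16.h header is included in the JTX1 CUDA 7.0 toolkit as it is in CUDA 7.5; the file and kernel names are just illustrative. The FP32 inputs are converted to half precision on the device, added with the native FP16 intrinsic (which requires sm_53 or newer), and converted back for printing:

```cuda
// fp16_add.cu -- hypothetical FP16 example for Jetson TX1 (sm_53)
// Build: nvcc -arch=sm_53 fp16_add.cu -o fp16_add
#include <cstdio>
#include <cuda_fp16.h>

// Convert FP32 inputs to FP16, add with the native half-precision
// intrinsic (sm_53+), and store the result back as FP32.
__global__ void haddKernel(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        __half ha = __float2half(a[i]);
        __half hb = __float2half(b[i]);
        c[i] = __half2float(__hadd(ha, hb));
    }
}

int main()
{
    const int n = 256;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));

    for (int i = 0; i < n; ++i) { a[i] = 1.5f; b[i] = 2.25f; }

    haddKernel<<<(n + 127) / 128, 128>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expected 3.75)\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

For higher throughput you would typically pack two values into a half2 and use the paired intrinsics (e.g. __hadd2), but the single-element form above keeps the example short.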