How to test Tensor Cores and mixed precision on an RTX 2070?

Hello everyone, this is my first post in this community!

I’m currently taking the fast.ai MOOC “Deep Learning for Coders”, using the fastai library (built on PyTorch) on Ubuntu 16.04 with the 410.73 NVIDIA drivers.

I just got an RTX 2070 (alongside a 1080 Ti) and tried to use mixed precision in a Jupyter Notebook, but it keeps crashing with little in the way of error messages.
So I’d first like to check whether the 2070 is actually capable of using its Tensor Cores with FP16.
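Before anything else, is there a quick way to confirm the card even reports a Tensor-Core-capable compute capability? Here is a small check I put together myself (the helper name is my own), based on my understanding that Tensor Cores require compute capability 7.0 or higher; the RTX 2070 (Turing) should report 7.5 and the 1080 Ti (Pascal) 6.1:

```python
import torch

def supports_tensor_cores(major: int, minor: int) -> bool:
    # Tensor Cores were introduced with compute capability 7.0 (Volta);
    # Turing cards like the RTX 2070 report 7.5
    return (major, minor) >= (7, 0)

if torch.cuda.is_available():
    # Print capability and Tensor Core support for every visible GPU
    for i in range(torch.cuda.device_count()):
        cap = torch.cuda.get_device_capability(i)
        print(torch.cuda.get_device_name(i), cap, supports_tensor_cores(*cap))
```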

I tried to run the script presented by Christian Sarofeen in http://on-demand.gputechconf.com/gtc/2018/video/S81012/ (around 13:40), but it crashes too.

import torch
import torch.nn

# Batch size, input features, output features
bsz, inf, outf = 256, 1024, 2048

# FP16 input and FP16 linear layer on the GPU; the matrix multiply
# should dispatch to Tensor Core kernels on Volta/Turing hardware
tensor = torch.randn(bsz, inf).cuda().half()
layer = torch.nn.Linear(inf, outf).cuda().half()
layer(tensor)

The error output is:

eric@eric-MS-7A33:~/Link_fastaiV1$ nvprof python tensorcore_test.py 
==20569== NVPROF is profiling process 20569, command: python tensorcore_test.py
==20569== Profiling application: python tensorcore_test.py
==20569== Profiling result:
No kernels were profiled.

==20569== API calls:
No API activities were profiled.
==20569== Warning: Some profiling data are not recorded. Make sure cudaProfilerStop() or cuProfilerStop() is called before application exit to flush profile data.
======== Error: Application received signal 139
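For reference, my understanding from the talk is that when the script does run correctly, Tensor Core usage should show up in the nvprof kernel list as GEMM kernels with “884” (Volta) or “1688” (Turing) in their names. This is the kind of pattern match I was planning to look for (the kernel names in the comment are just illustrative examples):

```shell
# Run the test script under nvprof and look for Tensor Core (HMMA) kernels;
# their names contain "884" on Volta and "1688" on Turing, e.g.
# volta_fp16_s884gemm_fp16_... or turing_fp16_s1688gemm_fp16_...
nvprof python tensorcore_test.py 2>&1 | grep -Ei "884|1688"
```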

Can you help me fix it? (If I read it correctly, signal 139 means the process segfaulted, i.e. 128 + SIGSEGV.)

Or is there another generic script for checking mixed-precision compute?
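In case it helps to be concrete, here is the kind of generic check I had in mind: a rough sketch (my own code, not an official test) that times a large FP32 matmul against an FP16 one. My understanding is that on a Turing card with working Tensor Cores, the FP16 case should be noticeably faster, and that cuBLAS only dispatches Tensor Core GEMMs when the matrix dimensions are multiples of 8, hence the padding helper:

```python
import time
import torch

def pad_to_multiple(n: int, m: int = 8) -> int:
    # Round n up to the next multiple of m; Tensor Core GEMMs reportedly
    # require matrix dimensions that are multiples of 8
    return ((n + m - 1) // m) * m

def time_matmul(dtype: torch.dtype, n: int = 4096, iters: int = 10) -> float:
    n = pad_to_multiple(n)
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    torch.cuda.synchronize()  # start timing from a clean state
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()  # wait for all queued kernels to finish
    return (time.time() - start) / iters

if torch.cuda.is_available():
    t32 = time_matmul(torch.float32)
    t16 = time_matmul(torch.float16)
    print(f"fp32: {t32 * 1e3:.2f} ms  fp16: {t16 * 1e3:.2f} ms  "
          f"speedup: {t32 / t16:.1f}x")
```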

Many thanks,

EricPB