CUDA containers do not work, yet torch detects the GPU successfully

I installed CUDA successfully under WSL.

In torch, I can detect my GPU.

$ python
>>> import torch
>>> torch.cuda.get_device_name(0)
'GeForce RTX 2080 Ti'
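
In case it helps, here is a short script version of the same check (a minimal sketch, using only standard torch.cuda calls in the same WSL Python environment as above):

import torch

print(torch.__version__)          # installed torch build
print(torch.version.cuda)         # CUDA version torch was compiled against
print(torch.cuda.is_available())  # True, given that get_device_name(0) works
print(torch.cuda.device_count())  # number of GPUs torch can see
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))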

The BlackScholes sample runs smoothly, too, and reports:

BlackScholes, Throughput = 51.2686 GOptions/s, Time = 0.00016 s, Size = 8000000 options, NumDevsUsed = 1, Workgroup = 128

However, docker reports:

$ docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
ERRO[0001] error waiting for container: context canceled
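
I wonder whether the NVIDIA Container Toolkit side is set up correctly, but I am not sure how to verify that. The kind of check I have in mind looks like this (only a sketch; the package and runtime names are my assumption of what a correct setup would show on an Ubuntu-based WSL distro):

$ docker info 2>/dev/null | grep -i runtime   # should list an "nvidia" runtime if the toolkit is registered
$ dpkg -l | grep -i nvidia-container          # is nvidia-container-toolkit (or nvidia-docker2) installed?
$ nvidia-container-cli info                   # does the container CLI itself see the GPU?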

My docker version:

$ docker --version
Docker version 19.03.6, build 369ce74a3c

May I get some suggestions as to why this happens and how I can solve it?