Problem detecting GPU in WSL2

I have followed the instructions for installing CUDA on WSL2 (as given in CUDA on WSL :: CUDA Toolkit Documentation). The BlackScholes sample seems to detect the GPU, but when I try to detect the GPU using TensorFlow, nvidia-smi, or nvidia-docker2, I get an error. What am I doing wrong?

I am using Windows 10 Insider Preview.
Operating System: Windows 10 Pro Insider Preview 64-bit (10.0, Build 21343) (21343.rs_prerelease.210320-1757)


OUTPUT OF ./BlackScholes

[./BlackScholes] - Starting...
GPU Device 0: "Turing" with compute capability 7.5

Initializing data...
...allocating CPU memory for options.
...allocating GPU memory for options.
...generating input data in CPU mem.
...copying input data to GPU mem.
Data init done.

Executing Black-Scholes GPU kernel (512 iterations)...
Options count : 8000000
BlackScholesGPU() time : 0.273998 msec
Effective memory bandwidth: 291.972879 GB/s
Gigaoptions per second : 29.197288

BlackScholes, Throughput = 29.1973 GOptions/s, Time = 0.00027 s, Size = 8000000 options, NumDevsUsed = 1, Workgroup = 128

Reading back GPU results...
Checking the results...
...running CPU calculations.

Comparing the results...
L1 norm: 1.741792E-07
Max absolute error: 1.192093E-05

Shutting down...
...releasing GPU memory.
...releasing CPU memory.
Shutdown done.

[BlackScholes] - Test Summary

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Test passed

OUTPUT OF N-body simulation CUDA sample docker image
Status: Downloaded newer image for
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: requirement error: unsatisfied condition: cuda>=11.2, please update your driver to a newer version, or use an earlier cuda container: unknown.
ERRO[0020] error waiting for container: context canceled
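For what it's worth, the error message itself means the container image requires CUDA >= 11.2, while the NVIDIA container runtime sees a driver that reports an older CUDA version. Conceptually, the runtime is doing a version comparison along these lines (a hypothetical simplification in Python, not the actual nvidia-container-cli code):

```python
def satisfies(driver_cuda: str, required: str = "11.2") -> bool:
    """Return True if the driver's reported CUDA version meets the
    container's requirement (e.g. the 'cuda>=11.2' condition above)."""
    # Compare numerically component by component, not as strings,
    # so that e.g. "11.10" correctly beats "11.2".
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(driver_cuda) >= to_tuple(required)

# A driver reporting CUDA 11.0 fails the cuda>=11.2 requirement:
print(satisfies("11.0"))  # → False
print(satisfies("11.2"))  # → True
```

So the check failing here suggests the Windows-side NVIDIA driver is too old for this particular container image, which matches the message's advice to either update the driver or pull an earlier CUDA container.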