OCI runtime create failed

Hi, I seem to get the error below every time I try to test Docker. I have followed the instructions set out on this page, and I am not sure whether I can actually use my RTX 8000. It seems I might have made a mistake somewhere; any help is appreciated.

docker run -it --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver not loaded\\\\n\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled
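
The "nvml error: driver not loaded" message from `nvidia-container-cli` means the runtime hook cannot see the NVIDIA driver at all. A few quick checks can narrow down where the chain breaks (a sketch, assuming a WSL2 setup like the one above; the `check` helper is just for readable output):

```shell
# Report each GPU prerequisite as OK or MISSING instead of failing outright.
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "OK      $desc"
  else
    echo "MISSING $desc"
  fi
}

check "nvidia-smi on PATH (Windows driver exposed to WSL)" command -v nvidia-smi
check "WSL CUDA library present" test -e /usr/lib/wsl/lib/libcuda.so.1
check "nvidia runtime registered with Docker" sh -c 'docker info 2>/dev/null | grep -q nvidia'
```

If the first check fails, the Windows-side driver is not being passed through to WSL2, which matches the "driver not loaded" error exactly.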

docker version
> Client: Docker Engine - Community
>  Version:           19.03.12
>  API version:       1.40
>  Go version:        go1.13.10
>  Git commit:        48a66213fe
>  Built:             Mon Jun 22 15:45:36 2020
>  OS/Arch:           linux/amd64
>  Experimental:      false
> Server: Docker Engine - Community
>  Engine:
>   Version:          19.03.12
>   API version:      1.40 (minimum version 1.12)
>   Go version:       go1.13.10
>   Git commit:       48a66213fe
>   Built:            Mon Jun 22 15:44:07 2020
>   OS/Arch:          linux/amd64
>   Experimental:     false
>  containerd:
>   Version:          1.2.13
>   GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
>  runc:
>   Version:          1.0.0-rc10
>   GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
>  docker-init:
>   Version:          0.18.0
>   GitCommit:        fec3683
docker info
> Client:
>  Debug Mode: false
> Server:
>  Containers: 26
>   Running: 0
>   Paused: 0
>   Stopped: 26
>  Images: 5
>  Server Version: 19.03.12
>  Storage Driver: overlay2
>   Backing Filesystem: extfs
>   Supports d_type: true
>   Native Overlay Diff: true
>  Logging Driver: json-file
>  Cgroup Driver: cgroupfs
>  Plugins:
>   Volume: local
>   Network: bridge host ipvlan macvlan null overlay
>   Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
>  Swarm: inactive
>  Runtimes: nvidia runc
>  Default Runtime: runc
>  Init Binary: docker-init
>  containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
>  runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
>  init version: fec3683
>  Security Options:
>   seccomp
>    Profile: default
>  Kernel Version: 4.19.121-microsoft-standard
>  Operating System: Ubuntu 18.04.4 LTS
>  OSType: linux
>  Architecture: x86_64
>  CPUs: 40
>  Total Memory: 100.4GiB
>  Name: PC12660
>  Docker Root Dir: /var/lib/docker
>  Debug Mode: false
>  Registry: https://index.docker.io/v1/
>  Labels:
>  Experimental: false
>  Insecure Registries:
>  Live Restore Enabled: false
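
One thing the `docker info` output above confirms: the nvidia runtime is registered (`Runtimes: nvidia runc`) but is not the default (`Default Runtime: runc`), so `--gpus all` is the right per-run flag. If you would rather make it the default, the standard nvidia-container-runtime configuration goes in `/etc/docker/daemon.json` (a sketch of the usual layout; restart the Docker daemon after editing):

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

Note this only changes which runtime is selected; it will not fix the "driver not loaded" error, which is about the driver itself.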

DxDiag.txt (148.5 KB)

A clean re-install of Ubuntu on WSL2 seems to have sorted the problem, with only

/sbin/ldconfig.real: /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link

being a concern.

However, continuing with Docker I got:

Digest: sha256:aaca690913e7c35073df08519f437fa32d4df59a89ef1e012360fbec46524ec8
Status: Downloaded newer image for nvcr.io/nvidia/k8s/cuda-sample:nbody
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
    -fullscreen       (run n-body simulation in fullscreen mode)
    -fp64             (use double precision floating point values for simulation)
    -hostmem          (stores simulation data in host memory)
    -benchmark        (run benchmark to measure performance)
    -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
    -device=<d>       (where d=0,1,2.... for the CUDA device to use)
    -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
    -compare          (compares simulation results running once on the default GPU and once on the CPU)
    -cpu              (run n-body simulation on the CPU)
    -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
MapSMtoCores for SM 7.5 is undefined.  Default to use 64 Cores/SM
GPU Device 0: "Quadro RTX 8000" with compute capability 7.5

> Compute 7.5 CUDA device: [Quadro RTX 8000]
73728 bodies, total time for 10 iterations: 115.864 ms
= 469.157 billion interactions per second
= 9383.137 single-precision GFLOP/s at 20 flops per interaction
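
The reported throughput follows directly from the run parameters: an all-pairs n-body step performs bodies² interactions per iteration, and the sample counts 20 flops per interaction. A quick recomputation (awk here; tiny rounding differences from the printed figures are expected, since the benchmark uses the unrounded timing internally):

```shell
# Recompute the benchmark figures from the printed run parameters.
# Model: bodies^2 pairwise interactions per iteration, 20 flops each.
awk -v n=73728 -v iters=10 -v ms=115.864 'BEGIN {
  per_sec = n * n * iters / (ms / 1000)   # interactions per second
  printf "%.3f billion interactions per second\n", per_sec / 1e9
  printf "%.3f single-precision GFLOP/s\n", per_sec * 20 / 1e9
}'
```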

The biggest obstacle to this being a daily-use system is issue 4197: the speeds on /mnt are very slow.

/sbin/ldconfig.real: /usr/lib/wsl/lib/libcuda.so.1 is not a symbolic link

This is a known issue, but the warning is benign, so you can safely ignore it.
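
If you do want the warning gone, the usual workaround discussed in the WSL issue threads is to recreate `libcuda.so.1` as a real symlink to the versioned library in `/usr/lib/wsl/lib` and re-run `ldconfig`. The sketch below demonstrates the intended link layout in a scratch directory rather than touching the real files (the exact versioned filename `libcuda.so.1.1` is an assumption; check your own `/usr/lib/wsl/lib` first):

```shell
# Demonstrate the link layout ldconfig expects, in a throwaway directory.
# On a real system the files live in /usr/lib/wsl/lib and need sudo.
fixdir=$(mktemp -d)
touch "$fixdir/libcuda.so.1.1"                # stand-in for the actual driver library
ln -s libcuda.so.1.1 "$fixdir/libcuda.so.1"   # libcuda.so.1 -> versioned library
ln -s libcuda.so.1 "$fixdir/libcuda.so"       # libcuda.so   -> libcuda.so.1
ls -l "$fixdir"
```

On the real system the equivalent would be the same two `ln -s` commands (with `sudo`) in `/usr/lib/wsl/lib`, followed by `sudo ldconfig`; but since the warning is harmless, ignoring it is also fine.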