CUDA NAMD two-GPU error

Hi,
I pulled the NAMD image using NVIDIA Docker and tried to run it with the command below:
namd3 +ppn 18 +setcpuaffinity +idlepoll /apoa1/apoa1_nve_cuda_soa.namd

I have two GPUs (Quadro RTX 5000) in my workstation, and the run fails with the error below; it works fine on a single GPU. I want to use both GPUs to increase performance. The error is the same with or without NVLink.

root@58e5b63afbf8:/apoa1# namd3 +ppn 18 +setcpuaffinity +idlepoll /apoa1/apoa1_nve_cuda_soa.namd
Charm++: standalone mode (not using charmrun)
Charm++> Running in Multicore mode: 18 threads (PEs)
Charm++> Using recursive bisection (scheme 3) for topology aware partitions
Converse/Charm++ Commit ID: v6.10.1-0-gcc60a79
Warning> Randomization of virtual memory (ASLR) is turned on in the kernel, thread migration may not work! Run 'echo 0 > /proc/sys/kernel/randomize_va_space' as root to disable it, or try running with '+isomalloc_sync'.
CharmLB> Load balancer assumes all CPUs are same.
Charm++> cpu affinity enabled.
Charm++> Running on 1 hosts (2 sockets x 10 cores x 2 PUs = 40-way SMP)
Charm++> cpu topology info is gathered in 0.001 seconds.
Info: Built with CUDA version 10020
Did not find +devices i,j,k,... argument, using all
Pe 11 physical rank 11 will use CUDA device of pe 16
Pe 2 physical rank 2 will use CUDA device of pe 8
Pe 9 physical rank 9 will use CUDA device of pe 16
Pe 0 physical rank 0 will use CUDA device of pe 8
Pe 6 physical rank 6 will use CUDA device of pe 8
Pe 7 physical rank 7 will use CUDA device of pe 8
Pe 1 physical rank 1 will use CUDA device of pe 8
Pe 10 physical rank 10 will use CUDA device of pe 16
Pe 14 physical rank 14 will use CUDA device of pe 16
Pe 12 physical rank 12 will use CUDA device of pe 16
Pe 4 physical rank 4 will use CUDA device of pe 8
Pe 15 physical rank 15 will use CUDA device of pe 16
Pe 13 physical rank 13 will use CUDA device of pe 16
Pe 17 physical rank 17 will use CUDA device of pe 16
Pe 3 physical rank 3 will use CUDA device of pe 8
Pe 5 physical rank 5 will use CUDA device of pe 8
Pe 8 physical rank 8 binding to CUDA device 0 on 58e5b63afbf8: 'Quadro RTX 5000' Mem: 15107MB Rev: 7.5 PCI: 0:17:0
Pe 16 physical rank 16 binding to CUDA device 1 on 58e5b63afbf8: 'Quadro RTX 5000' Mem: 15099MB Rev: 7.5 PCI: 0:73:0
Info: NAMD 3.0alpha3 for Linux-x86_64-multicore-CUDA
Info:
Info: Please visit http://www.ks.uiuc.edu/Research/namd/
Info: for updates, documentation, and support information.
Info:
Info: Please cite Phillips et al., J. Comp. Chem. 26:1781-1802 (2005)
Info: in all publications reporting results obtained with NAMD.
Info:
Info: Based on Charm++/Converse 61001 for multicore-linux-x86_64-iccstatic
Info: Built Wed Jun 24 21:38:42 CDT 2020 by jmaia on cairo.ks.uiuc.edu
Info: 1 NAMD 3.0alpha3 Linux-x86_64-multicore-CUDA 18 58e5b63afbf8 root
Info: Running on 18 processors, 1 nodes, 1 physical nodes.
Info: CPU topology information available.
Info: Charm++/Converse parallel runtime startup completed at 1.60279 s
Info: 0 MB of memory in use based on /proc/self/stat
Info: Using bitfields in atom data structures.
Info: sizeof( CompAtom ) = 32
Info: sizeof( CompAtomExt ) = 8
CkLoopLib is used in SMP with simple dynamic scheduling (converse-level notification)
Info: Configuration file is /apoa1/apoa1_nve_cuda_soa.namd
Info: Changed directory to /apoa1
TCL: Suspending until startup complete.
Warning: Disabling lonepair support due to incompatability with SOA.
Info: Using SOA integration routine
FATAL ERROR: CUDASOAintegrate does not support multiple devices currently.
You might not be specifying the +devices flag while trying to
run in a multi-gpu environment.
Please pick a single device using the +devices flag or
run your simulation without CUDASOAintegrateOn.
[Partition 0][Node 0] End of program
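
Following the error message, I can pin the run to a single GPU with the +devices flag and it runs fine (device index 0 here is just an example; the indices match the CUDA device numbers shown in the log above):

namd3 +ppn 18 +setcpuaffinity +idlepoll +devices 0 /apoa1/apoa1_nve_cuda_soa.namd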
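
The other option the error text mentions is running without the SOA integrator; as I understand the message (my reading, not verified), that means turning the option off in the .namd config file, though that would give up the CUDASOAintegrate performance path I'm after:

CUDASOAintegrate off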

Thanks,
Surya