We have a workstation with 12 CPUs and 4 GPUs (4 Tesla C2070 cards), and we are trying to run NAMD simulations on it. We are very excited about this new addition to our tools, and we wanted to test it and compare its performance, but we are not sure we are doing it correctly.
What is the correct command to run NAMD on this workstation?
We are currently using one similar to this:
charmrun ++local +p4 namd2 +idlepoll [configuration file] > [log file]
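For comparison, here is a variant of that command that would use all 12 CPU cores rather than only 4, with the PEs sharing the four GPUs. This is only a sketch based on the options visible in the log below (+idlepoll and the +devices list NAMD reports falling back on); the +p count and device list are assumptions to adjust for your own setup:

```shell
# Sketch: request 12 PEs so all CPU cores are used; with 4 GPUs,
# 3 PEs would then share each device.
# +devices explicitly lists the CUDA devices to bind to (the log
# shows NAMD "using all" devices when this flag is omitted).
charmrun ++local +p12 namd2 +idlepoll +devices 0,1,2,3 [configuration file] > [log file]
```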
Here is the first part of a log file from a run on the workstation:
Charm++> scheduler running in netpoll mode.
Charm++> Running on 1 unique compute nodes (12-way SMP).
Charm++> cpu topology info is gathered in 0.007 seconds.
Info: NAMD 2.8 for Linux-x86_64-CUDA
Info: Please visit http://www.ks.uiuc.edu/Research/namd/
Info: for updates, documentation, and support information.
Info: Please cite Phillips et al., J. Comp. Chem. 26:1781-1802 (2005)
Info: in all publications reporting results obtained with NAMD.
Info: Based on Charm++/Converse 60303 for net-linux-x86_64-iccstatic
Info: Built Sat May 28 11:30:15 CDT 2011 by jim on larissa.ks.uiuc.edu
Info: 1 NAMD 2.8 Linux-x86_64-CUDA 4 cabeza.compbiophyslab.com Guest
Info: Running on 4 processors, 4 nodes, 1 physical nodes.
Info: CPU topology information available.
Info: Charm++/Converse parallel runtime startup completed at 0.00935698 s
Pe 3 physical rank 3 binding to CUDA device 3 on cabeza.compbiophyslab.com: 'Tesla C2070' Mem: 4095MB Rev: 2.0
Pe 1 physical rank 1 binding to CUDA device 1 on cabeza.compbiophyslab.com: 'Tesla C2070' Mem: 4095MB Rev: 2.0
Did not find +devices i,j,k,... argument, using all
Pe 0 physical rank 0 binding to CUDA device 0 on cabeza.compbiophyslab.com: 'Tesla C2070' Mem: 4095MB Rev: 2.0
Pe 2 physical rank 2 binding to CUDA device 2 on cabeza.compbiophyslab.com: 'Tesla C2070' Mem: 4095MB Rev: 2.0
Info: 1.63564 MB of memory in use based on CmiMemoryUsage
Info: Configuration file is /home/Guest/Adrian/asyn/ProductionNa15/runNa15.namd
Info: Changed directory to /home/Guest/Adrian/asyn/ProductionNa15