Concurrent DMA transfers up and down along with kernel execution

I have a high-performance computing application, but I do not need multiple “virtual GPUs”. Performance-wise, a P2000 GPU has everything I need, except that I’m worried it may not support simultaneous execution of a DMA uplink of data from host to device memory, a DMA downlink from device to host, and kernel execution. Although my computational requirement is comfortably below 1 TOPS, I need this feature because of my application’s memory bandwidth requirements. Does the P2000 support it? The P2000 data sheet, which has only a few top-level specs on it, does not tell me. The CUDA C Programming Guide seems to imply (section 3.2.5.4) that the answer depends on a property called asyncEngineCount, but I could not find anything on the NVIDIA website that says what the asyncEngineCount for the P2000 is. I’m sure the P40 will do what I want, but it is massive overkill computationally. Please advise whether I can do concurrent DMA up, DMA down, and computation with the P2000.
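
For reference, if I had a card in hand I could just query the property myself with cudaGetDeviceProperties. A minimal sketch (assuming device 0 and the runtime API; the program name and error handling are my own):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);   /* device 0 assumed */
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    /* asyncEngineCount: 0 = no copy/kernel overlap, 1 = one copy engine
       (a copy can overlap kernels, but not another copy), 2 = two copy
       engines (H2D copy, D2H copy, and kernel execution can all run
       concurrently). */
    printf("%s: asyncEngineCount = %d\n", prop.name, prop.asyncEngineCount);
    return 0;
}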

Thank you

Hello, is there anybody out there who knows the answer to this? I am trying to find out whether the Quadro P2000 GPU supports concurrent streams that can upload data from host memory, perform computation, and download data to host memory simultaneously.

Thank you

Here is the deviceQuery output from a Quadro P2000 on a 64-bit Windows 7 system:

deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Quadro P2000"
  CUDA Driver Version / Runtime Version          9.1 / 8.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 5120 MBytes (5368709120 bytes)
  ( 8) Multiprocessors, (128) CUDA Cores/MP:     1024 CUDA Cores
  GPU Max Clock rate:                            1481 MHz (1.48 GHz)
  Memory Clock rate:                             3504 Mhz
  Memory Bus Width:                              160-bit
  L2 Cache Size:                                 1310720 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.1, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = Quadro P2000
Result = PASS
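
Since the output reports “Concurrent copy and kernel execution: Yes with 2 copy engine(s)”, the P2000 can have a host-to-device copy, a kernel, and a device-to-host copy in flight at the same time. The usual way to exploit that is pinned (page-locked) host memory, multiple streams, and cudaMemcpyAsync. Below is a minimal sketch of that pattern; the kernel, buffer sizes, and chunk count are illustrative assumptions, not anything from the posts above:

#include <stdio.h>
#include <cuda_runtime.h>

/* Trivial placeholder kernel: doubles each element of a chunk. */
__global__ void scale(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main(void)
{
    const int N = 1 << 20;        /* elements per chunk (illustrative size) */
    const int NCHUNKS = 4;
    float *h_buf, *d_buf;

    /* Pinned host memory is required for cudaMemcpyAsync to actually overlap. */
    cudaMallocHost((void **)&h_buf, (size_t)NCHUNKS * N * sizeof(float));
    cudaMalloc((void **)&d_buf, (size_t)NCHUNKS * N * sizeof(float));
    for (int i = 0; i < NCHUNKS * N; ++i) h_buf[i] = (float)i;

    cudaStream_t streams[NCHUNKS];
    for (int s = 0; s < NCHUNKS; ++s)
        cudaStreamCreate(&streams[s]);

    for (int c = 0; c < NCHUNKS; ++c) {
        float *h = h_buf + (size_t)c * N;
        float *d = d_buf + (size_t)c * N;
        /* With 2 copy engines, the H2D copy of chunk c+1, the kernel on
           chunk c, and the D2H copy of chunk c-1 (each in its own stream)
           can all run concurrently. */
        cudaMemcpyAsync(d, h, N * sizeof(float), cudaMemcpyHostToDevice, streams[c]);
        scale<<<(N + 255) / 256, 256, 0, streams[c]>>>(d, N);
        cudaMemcpyAsync(h, d, N * sizeof(float), cudaMemcpyDeviceToHost, streams[c]);
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < NCHUNKS; ++s)
        cudaStreamDestroy(streams[s]);
    cudaFree(d_buf);
    cudaFreeHost(h_buf);
    return 0;
}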