Hi!
I am trying to overlap a cuBLAS GEMM call with D2H and H2D copies. I noticed that the GEMM kernel does not start executing until the H2D copy finishes. I believe I have avoided the common pitfalls of using the synchronous copy functions or forgetting to allocate pinned host memory.
Here is the code:
#include <cuda_runtime.h>
#include <cublas_v2.h>

const int n = 1000;

// Pinned host buffer and device buffer for the H2D transfer
float* data_in_host;
float* data_in_device;
cudaHostAlloc( (void**)&data_in_host, n*n*sizeof(float), cudaHostAllocPortable );
cudaMalloc( (void**)&data_in_device, n*n*sizeof(float) );

// Pinned host buffer and device buffer for the D2H transfer
float* data_out_host;
float* data_out_device;
cudaHostAlloc( (void**)&data_out_host, n*n*sizeof(float), cudaHostAllocPortable );
cudaMalloc( (void**)&data_out_device, n*n*sizeof(float) );

// Device buffers used only by the GEMM, independent of the copies
float* data_proc_A_device;
float* data_proc_B_device;
float* data_proc_C_device;
cudaMalloc( (void**)&data_proc_A_device, n*n*sizeof(float) );
cudaMalloc( (void**)&data_proc_B_device, n*n*sizeof(float) );
cudaMalloc( (void**)&data_proc_C_device, n*n*sizeof(float) );

// One stream per copy direction and one for the compute
cudaStream_t stream_in;
cudaStream_t stream_out;
cudaStream_t stream_proc;
cudaStreamCreate( &stream_in );
cudaStreamCreate( &stream_out );
cudaStreamCreate( &stream_proc );

cublasHandle_t handle;
cublasCreate( &handle );
cublasSetStream( handle, stream_proc );
float al = 1;
float be = 1;

// ====================================================
cudaMemcpyAsync( data_in_device, data_in_host, n*n*sizeof(float), cudaMemcpyHostToDevice, stream_in );
// cudaMemcpyAsync( data_out_host, data_out_device, n*n*sizeof(float), cudaMemcpyDeviceToHost, stream_out );
cublasSgemm( handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n, &al, data_proc_A_device, n, data_proc_B_device, n, &be, data_proc_C_device, n );
// ====================================================

// make sure the async work has finished before tearing everything down
cudaDeviceSynchronize();

cublasDestroy( handle );
cudaStreamDestroy( stream_in );
cudaStreamDestroy( stream_out );
cudaStreamDestroy( stream_proc );
cudaFreeHost( data_in_host );
cudaFree( data_in_device );
cudaFreeHost( data_out_host );
cudaFree( data_out_device );
cudaFree( data_proc_A_device );
cudaFree( data_proc_B_device );
cudaFree( data_proc_C_device );
Interestingly, the GEMM execution only waits for the H2D transfer; it runs in parallel with a D2H copy without any problem.
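If it helps, here is a minimal sketch of how the region between the markers could be timed with events to check for overlap (it reuses the variables from the snippet above; the event code itself is only illustrative):

cudaEvent_t start, stop;
cudaEventCreate( &start );
cudaEventCreate( &stop );

cudaEventRecord( start, 0 );
cudaMemcpyAsync( data_in_device, data_in_host, n*n*sizeof(float), cudaMemcpyHostToDevice, stream_in );
cublasSgemm( handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n, &al, data_proc_A_device, n, data_proc_B_device, n, &be, data_proc_C_device, n );
cudaStreamSynchronize( stream_in );
cudaStreamSynchronize( stream_proc );
cudaEventRecord( stop, 0 );
cudaEventSynchronize( stop );

float ms = 0.0f;
cudaEventElapsedTime( &ms, start, stop );
// With overlap the total should be close to max(copy time, GEMM time);
// without overlap it is close to their sum.

The start/stop events are recorded on the default stream so that, together with the stream synchronizations, they bracket the work issued on both streams.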
Here is my deviceQuery output:
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 2 CUDA Capable device(s)
Device 0: "GeForce GTX 1080 Ti"
CUDA Driver Version / Runtime Version 9.1 / 9.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 11178 MBytes (11721506816 bytes)
(28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
GPU Max Clock rate: 1721 MHz (1.72 GHz)
Memory Clock rate: 5505 Mhz
Memory Bus Width: 352-bit
L2 Cache Size: 2883584 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 8 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
Device 1: "GeForce GTX 1080 Ti"
CUDA Driver Version / Runtime Version 9.1 / 9.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 11176 MBytes (11719409664 bytes)
(28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
GPU Max Clock rate: 1721 MHz (1.72 GHz)
Memory Clock rate: 5505 Mhz
Memory Bus Width: 352-bit
L2 Cache Size: 2883584 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 65 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from GeForce GTX 1080 Ti (GPU0) -> GeForce GTX 1080 Ti (GPU1) : Yes
> Peer access from GeForce GTX 1080 Ti (GPU1) -> GeForce GTX 1080 Ti (GPU0) : Yes