I have a question about using the CUBLAS library that I am hoping someone can answer. I am using this library for a real-time system and its processing time is great! (My kernel takes about 1ms to complete what takes 2-3 minutes in Matlab.) The main delay seems to be moving the data to and from the device. I understand that data transfer between the host and the device is slow, but I have a 1024x1024 array of floats and the cublasGetVector() call is taking about 28ms.
Does this time sound correct?
Is there any way it can be improved?
I am using an ASUS Commando motherboard with a GTX275 board. Are the Tesla boards quicker for memory transfers?
Thanks for your help!
That is 1024 x 1024 x 4 bytes / 28e-3 s ≈ 150 MB/s, which is improbably low for a full 16-lane PCI-e 1.0 slot. You can confirm the pinned and pageable bandwidth of your card/motherboard with the SDK bandwidthTest sample, and I am guessing you will get numbers at least 10x the cublasGetVector() throughput you are quoting. But I have a feeling that what is really happening is that your timings are wrong: your kernel is taking much longer than you think it is, and what you are attributing to memory copy time is really kernel running time (remember that all kernel launches, including those inside cublas, are asynchronous with respect to the host).
Your GTX275 should be about as good as it gets for host-device bandwidth. No current Tesla will be any faster.
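If you want to check this outside the SDK sample, here is a minimal sketch of the same measurement (the buffer size and variable names are illustrative, not taken from your code): it times a host-to-device copy from ordinary pageable memory against one from pinned memory allocated with cudaMallocHost(), using CUDA events.

[codebox]
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

int main(void)
{
    const size_t N = 1024 * 1024;            // 1024x1024 floats, as in your case
    const size_t bytes = N * sizeof(float);

    float *h_pageable = (float *)malloc(bytes);
    float *h_pinned = NULL;
    float *d_buf = NULL;
    cudaMallocHost((void **)&h_pinned, bytes);  // page-locked host memory
    cudaMalloc((void **)&d_buf, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    float ms;

    // warm-up copy so context creation is not included in the timing
    cudaMemcpy(d_buf, h_pageable, bytes, cudaMemcpyHostToDevice);

    // pageable transfer
    cudaEventRecord(start, 0);
    cudaMemcpy(d_buf, h_pageable, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("pageable: %.2f ms (%.1f MB/s)\n", ms, bytes / ms / 1e3);

    // pinned transfer
    cudaEventRecord(start, 0);
    cudaMemcpy(d_buf, h_pinned, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("pinned:   %.2f ms (%.1f MB/s)\n", ms, bytes / ms / 1e3);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_buf);
    cudaFreeHost(h_pinned);
    free(h_pageable);
    return 0;
}
[/codebox]

The pinned number should come out well above the cublasGetVector() throughput you are quoting, which is another way to confirm that the copy itself is not the bottleneck.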
[codebox]
// launch the kernel
abs_complex<<< grid2, threads2 >>>(d_image_buff, d_result_buff, COLUMNS);

// stop and destroy timer
cutilCheckError(cutStopTimer(timer));
printf("Processing time: %f (ms)\n", cutGetTimerValue(timer));
cutilCheckError(cutDeleteTimer(timer));

// copy the result back to create the complex image matrix
status = cublasGetVector(lSize, sizeof(d_image_buff[0]), d_image_buff, 1, h_image_buff, 1);
if (status != CUBLAS_STATUS_SUCCESS) { return ERR_CUDA_CUBLAS; }
[/codebox]
When I run the code as displayed above, I get around 0.6ms - 1ms.
However, when I stop the timer after the cublasGetVector() call, I get around 30ms. When I put the timer only around the cublasGetVector() call, I get around 29ms.
Is this the correct way of measuring the time?
Is there a way to ensure that all kernels have completed execution? (Can I use __syncthreads() outside a kernel?)
I put this code into a DLL and am using it in my real-time system (using Labview for acquisition), but I am having problems. Sometimes the cublasSetVector() call will just fail, and I am assuming it is because other kernels that use this data have not completed. It would be great if someone could give me a hint/comment about how to ensure that the kernels have completed execution.
That is very slow. For a 16-lane PCI-e v1 slot, I would expect something closer to 2GB/s. Is the CUDA card in the 16-lane slot or the 4-lane x16 slot?
There is a function, cudaThreadSynchronize(), which you can and should use when timing kernels or other asynchronous operations (this includes cublas kernel calls, but not copy operations). So to time a kernel execution you should do something like this (in pseudocode, using the same cutil timers as your snippet; the kernel and argument names are placeholders):
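[codebox]
// illustrative sketch only: my_kernel, grid, threads and args are placeholder names
cutilCheckError(cutStartTimer(timer));

my_kernel<<< grid, threads >>>(args);   // launch returns to the host immediately
cudaThreadSynchronize();                // block the host until the kernel has finished

cutilCheckError(cutStopTimer(timer));   // the elapsed time now includes the kernel
printf("Kernel time: %f (ms)\n", cutGetTimerValue(timer));
[/codebox]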
That should ensure that the host blocks until the kernel finishes and your timing is correct. I didn’t read your code, so I am not sure whether you are timing correctly or not; I find those scrolling code boxes intolerably hard to read, like trying to read a newspaper through a letterbox slot…