CUDA and OpenMP

I’m a new user of CUDA!!

I wrote a linear solver for the CPU and parallelized it using OpenMP.
I also wrote a GPU version of the same solver using CUDA.

I need to solve two systems, Ax1 = b1 and Ax2 = b2.
I have an 8-core machine, and my idea was to use one core as the host for my CUDA solver and the other seven cores to solve the other system.
That way I can use seven parallel threads to solve the first system on the CPU while simultaneously solving the second system on the GPU.

Unfortunately, my application stops responding when I launch the computation; it looks like the CUDA solver never returns control to its host thread.

Is my idea feasible? Can I run my two solvers in parallel, one on the CPU and the other on the GPU?

Thanks in advance!!


Yes, it is feasible.
You just need to be careful to initialize and drive the GPU solver from the same thread (for example, the OpenMP master thread), since the CUDA context is tied to the host thread that created it.


Thank you very much!! You’re perfectly right!!

In order to measure the performance, I first allocate memory on both the CPU and the GPU and then launch the multi-threaded computation… Done that way, nothing guarantees that the thread that initializes and allocates on the GPU is the same thread that drives my CUDA solver.

I just changed the code and it works sweetly!!
Thank you!!