Simulation engine based on CUDA: exploring a new(?) idea

Hi all,

I’m starting to evaluate the possibility of using CUDA for microcontroller simulation:
what I want is to use the parallelism provided by CUDA to run
n simulations of the same microcontroller program in parallel.

I.e. I have 1 program, and I have to execute it many times with different input data:
is it possible to use CUDA to run it in parallel on the different input data sets?

Any ideas or references on the feasibility (or infeasibility) of this approach?

thanks
giammy

Executing the same code on different sets of data is the heart of data-parallel programming, so of course this is possible! Start with the CUDA programming guide as the first reference.
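
To make the pattern concrete, here is a minimal sketch of the usual mapping: one kernel launch, one thread per input data set, every thread running the same simulation routine on its own element. The names (simulate_one, simulate_all) and the toy computation are placeholders for illustration, not an existing API.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical per-data-set simulation: each thread runs the same
// "program" on its own input value and produces its own result.
__device__ int simulate_one(int input)
{
    // Placeholder for the microcontroller model; here just a toy loop.
    int acc = 0;
    for (int step = 0; step < 100; ++step)
        acc += (input + step) % 7;
    return acc;
}

__global__ void simulate_all(const int *inputs, int *results, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // one thread per simulation instance
        results[i] = simulate_one(inputs[i]);
}

int main()
{
    const int n = 1 << 16;
    int *inputs, *results;
    cudaMallocManaged(&inputs, n * sizeof(int));
    cudaMallocManaged(&results, n * sizeof(int));
    for (int i = 0; i < n; ++i) inputs[i] = i;   // n different data sets

    simulate_all<<<(n + 255) / 256, 256>>>(inputs, results, n);
    cudaDeviceSynchronize();

    printf("result[0] = %d\n", results[0]);
    cudaFree(inputs);
    cudaFree(results);
    return 0;
}
```

The key design choice is simply "one simulation instance = one thread"; the GPU schedules tens of thousands of these concurrently.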

The scenario I’m thinking of is the following:

1 - n different data sets

2 - 1 algorithm

3 - n different microcontroller simulations, each running on 1 set of data.

The problem I see is that the evolution of the algorithm can depend on the input data,
so the n different microcontroller simulations will end up executing different parts
of the algorithm.

So, I will start out executing the same instruction on different data, but there could
be intervals of time in which I’m executing different instructions on different data.

I need some way to manage this behaviour!

thanks

giammy

Sure, you will probably lose some efficiency to divergent warps. But the hardware is actually very efficient at handling this. Don’t discount an idea until you’ve tried it out and proven via benchmarking that the divergent warps are actually too costly.
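
To show where the divergence actually appears, here is a hedged sketch of a per-thread fetch-decode-execute loop; the instruction set, the Insn struct, and the countdown program are invented purely for illustration. Every thread interprets the same program, but the data-dependent JNZ branch means threads of one warp can follow different paths, and the hardware serializes those paths within the warp. That serialization is the cost worth benchmarking.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Toy instruction set for an invented microcontroller model.
enum Op { OP_LOAD, OP_ADD, OP_JNZ, OP_HALT };

struct Insn { Op op; int arg; };

// The same program for every thread: count the input down to zero.
__constant__ Insn d_program[4] = {
    { OP_LOAD, 0 },   // acc = input
    { OP_ADD, -1 },   // acc -= 1
    { OP_JNZ,  1 },   // if (acc != 0) goto 1   <-- data-dependent branch
    { OP_HALT, 0 }
};

__global__ void run_mcu(const int *inputs, int *cycles, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    int pc = 0, acc = 0, count = 0;
    bool halted = false;

    // Fetch-decode-execute loop; threads with larger inputs loop longer,
    // so threads in the same warp diverge and are serialized by the hardware.
    while (!halted && count < 1000000) {
        Insn insn = d_program[pc];
        ++count;
        switch (insn.op) {
            case OP_LOAD: acc = inputs[i]; ++pc; break;
            case OP_ADD:  acc += insn.arg; ++pc; break;
            case OP_JNZ:  pc = (acc != 0) ? insn.arg : pc + 1; break;
            case OP_HALT: halted = true; break;
        }
    }
    cycles[i] = count;   // per-simulation instruction count
}

int main()
{
    const int n = 1024;
    int *inputs, *cycles;
    cudaMallocManaged(&inputs, n * sizeof(int));
    cudaMallocManaged(&cycles, n * sizeof(int));
    for (int i = 0; i < n; ++i) inputs[i] = 1 + (i % 32);  // varied workloads

    run_mcu<<<(n + 255) / 256, 256>>>(inputs, cycles, n);
    cudaDeviceSynchronize();

    printf("cycles[0] = %d, cycles[31] = %d\n", cycles[0], cycles[31]);
    cudaFree(inputs);
    cudaFree(cycles);
    return 0;
}
```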