Is it possible to process complex Python objects in parallel using a combination of the Jetson TK1 and something like NVIDIA's PyCUDA? I have multiple objects that should simultaneously wait for input parameters to arrive, process them, and pass the results on to another object.
For me the main advantage would be not only the faster processing, but also (very importantly) preserving the correct order in which the arriving inputs are processed.
I want to use this for my Python neural network, which has some number x of neurons that are all simultaneously active (simulated concurrency is sufficient, as long as the order is correct). Each neuron constantly waits for input via an object-bound method; when something arrives, it waits a certain period for other inputs that fall into the same time slot, takes those into account when computing the spike, and then accesses object-level variables to send its output along the axons to the connected neurons.
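To make the intended behaviour concrete, here is a minimal plain-Python sketch of one such neuron (all names like `Neuron`, `receive`, and `step` are my own placeholders, and `sum` stands in for whatever the real spike rule would be): it blocks until a first input arrives, then collects everything else that lands within the same time slot before firing once to its connected neurons.

```python
import time
from queue import Queue, Empty

class Neuron:
    """Sketch of a neuron that batches inputs arriving in one time slot."""

    def __init__(self, window=0.01):
        self.inputs = Queue()   # arriving input parameters
        self.axons = []         # connected downstream neurons
        self.window = window    # how long to wait for "simultaneous" inputs

    def receive(self, value):
        self.inputs.put(value)

    def step(self):
        # Block until a first input arrives.
        batch = [self.inputs.get()]
        # Then gather everything else that arrives within the time slot.
        deadline = time.monotonic() + self.window
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self.inputs.get(timeout=remaining))
            except Empty:
                break
        spike = sum(batch)            # placeholder for the real spike rule
        for axon in self.axons:       # forward the spike to connected neurons
            axon.receive(spike)
        return spike
```

This is only a single-process illustration of the semantics I want; the open question is whether (and how) something with this structure can be mapped onto the CUDA cores of the TK1.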
The plan is to run one Python neuron object per available CUDA core.
Is this somehow possible with the Jetson TK1?
Thanks for any support :)