- The number of slave nodes is unknown at this time, but it could be in the hundreds.
- Each slave node will run an autonomous copy of a script written in Perl 5 (version 5.12.2 or higher).
- On each slave node, the program will perform computations that are neither CPU- nor GPU-intensive, but which will produce a vector of data (numbers and some text).
- The master node will then automatically download the MySQL data tables, singly or in batches, and apply the deep-learning program(s) to these vectors of data.
- The master node will need to leave the MySQL data table in place until predetermined stages of the processing are met. Duplication in the downloaded data is not a problem, but switching the programs on the slave nodes from one version to another at the appropriate time may be.
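To make the slave-side write concrete, here is a minimal sketch of how a node could push its computed data vector into the shared table. It is written in Python with SQLite standing in for MySQL purely so it is self-contained; the actual slave script would be Perl 5 using DBI with DBD::mysql and the same parameterized INSERT. The table and column names (`results`, `node_id`, `v1`, `v2`, `label`, `stage`) are illustrative assumptions, not part of the original spec.

```python
import sqlite3

# SQLite stands in for the shared MySQL server here; the real slave code
# would be Perl 5 (DBI/DBD::mysql) issuing the same parameterized INSERT.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE results (
        id      INTEGER PRIMARY KEY,
        node_id TEXT,                 -- which slave node produced the row
        v1      REAL,                 -- numeric components of the data vector
        v2      REAL,
        label   TEXT,                 -- text component of the data vector
        stage   INTEGER DEFAULT 0     -- 0 = not yet processed by the master
    )
""")

def push_vector(node_id, numbers, label):
    """Insert one computed data vector. The row stays in place afterward;
    only the master advances its stage flag."""
    conn.execute(
        "INSERT INTO results (node_id, v1, v2, label) VALUES (?, ?, ?, ?)",
        (node_id, numbers[0], numbers[1], label),
    )
    conn.commit()

# Example: one slave node reporting one vector of results.
push_vector("slave-042", [3.14, 2.72], "sample-A")
```

The `stage` column is one way to satisfy the "leave the table in place until predetermined stages are met" requirement: rows are flagged rather than deleted.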
The programs on the slave nodes will execute in Perl, while the master node will run C++ and R routines, all on GPUs, so the master node is best located within the NVIDIA network as well.
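The master-side batch pull described above can be sketched along the same lines. Again this is Python with SQLite as a stand-in (the production master would run C++/R against the real MySQL server); the `results` schema, `stage` flag, and batch size are illustrative assumptions. The point it demonstrates is that rows are read and flagged rather than deleted, so the table stays in place and re-downloading a row after a crash is harmless.

```python
import sqlite3

# SQLite stands in for the shared MySQL server; the schema mirrors what
# the slave nodes would write, with a stage column tracking progress.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE results (id INTEGER PRIMARY KEY, v1 REAL, label TEXT, "
    "stage INTEGER DEFAULT 0)"
)
conn.executemany(
    "INSERT INTO results (v1, label) VALUES (?, ?)",
    [(1.0, "a"), (2.0, "b"), (3.0, "c")],
)

def fetch_batch(batch_size=2):
    """Download one batch of unprocessed rows and advance their stage flag.
    Rows are never deleted; a crash before the UPDATE just means the same
    rows are downloaded again, which the design tolerates."""
    rows = conn.execute(
        "SELECT id, v1, label FROM results WHERE stage = 0 LIMIT ?",
        (batch_size,),
    ).fetchall()
    conn.executemany(
        "UPDATE results SET stage = 1 WHERE id = ?",
        [(r[0],) for r in rows],
    )
    conn.commit()
    return rows

batch = fetch_batch()  # pulls the first two unprocessed rows
```

Repeated calls drain the table batch by batch while leaving every row in place for later processing stages.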
I would greatly appreciate any expertise on these matters, particularly: (1) Can this be done with Perl 5 (version 5.12.2 or higher), and in which hardware environment? (2) Suppose 1,000 to 10,000 or more individuals log into their respective slave nodes at one time to enter data: will load balancing have to be pre-programmed, and/or does NVIDIA provide guidelines for that? Any other answers (or further questions) are most welcome. Thanks in advance.