I work for a Computer Numeric Control (CNC) company that manufactures router tables and waterjets, AXYZ Automation Group (AAG). Our current motion controller, the A2MC, uses a Field Programmable Gate Array (FPGA) to output step and direction signals for the stepper motors that drive the machine. I am curious whether NVIDIA's hardware/software stack could eliminate the need for the FPGA in our hardware. The FPGA was selected in 2008 because it could solve the standard formula for motion deterministically at 5.125 MHz.
Can NVIDIA's hardware running CUDA code solve d = (j * t^3)/6 + (a * t^2)/2 + (v * t) at a minimum of 1 MHz (ideally significantly faster) for 9 integrators, with +/-10 nanoseconds of jitter (discrepancy in the determinism) at a deterministic rate of 100 microseconds?
d = distance, in millimeters (mm); on the FPGA on the A2MC this unit was the Smidge, which is a very small unit
t = time, in milliseconds (ms); on the FPGA on the A2MC this unit is the Tick, where there are 4,150,000 ticks in a second
v = velocity, in millimeters per millisecond (mm/ms)
a = acceleration, in millimeters per millisecond squared (mm/ms^2)
j = jerk, in millimeters per millisecond cubed (mm/ms^3)
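For concreteness, here is a minimal CUDA sketch of what the per-integrator evaluation could look like. The kernel name, integrator count, state layout, and placeholder values are assumptions for illustration only; they do not reflect the A2MC firmware or any benchmarked implementation.

```
// motion_step.cu -- illustrative sketch only: evaluates the third-order motion
// formula d = (v * t) + (a * t^2)/2 + (j * t^3)/6 for a small set of integrators.
// Names and values are hypothetical, not taken from the A2MC.
#include <cstdio>
#include <cuda_runtime.h>

struct IntegratorState {
    float v;   // velocity      (mm/ms)
    float a;   // acceleration  (mm/ms^2)
    float j;   // jerk          (mm/ms^3)
};

__global__ void motionStep(const IntegratorState* s, float t, float* d, int n)
{
    int i = threadIdx.x;          // one thread per integrator/axis
    if (i < n) {
        float t2 = t * t;
        float t3 = t2 * t;
        // distance for this control period, in mm
        d[i] = s[i].v * t + 0.5f * s[i].a * t2 + (1.0f / 6.0f) * s[i].j * t3;
    }
}

int main()
{
    const int n = 9;              // 9 integrators, as in the question
    IntegratorState hostState[n];
    for (int i = 0; i < n; ++i)
        hostState[i] = {1.0f, 0.1f, 0.01f};   // placeholder values

    IntegratorState* devState;
    float* devDist;
    cudaMalloc(&devState, sizeof(hostState));
    cudaMalloc(&devDist, n * sizeof(float));
    cudaMemcpy(devState, hostState, sizeof(hostState), cudaMemcpyHostToDevice);

    // one 100 us control period, expressed in ms to match the unit definitions above
    float t = 0.1f;
    motionStep<<<1, n>>>(devState, t, devDist, n);

    float hostDist[n];
    cudaMemcpy(hostDist, devDist, sizeof(hostDist), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("integrator %d: %.6f mm\n", i, hostDist[i]);

    cudaFree(devState);
    cudaFree(devDist);
    return 0;
}
```

The arithmetic itself is trivial for a GPU; the open question for us is not throughput but whether the launch/readback path can be made deterministic to the tolerance above.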
While an FPGA, a GPU, and a CPU are fundamentally different, in 2024 can we solve the standard formula for motion deterministically using NVIDIA's hardware stack (CPU and GPU)? Theoretically, if we can, we should be able to create a functional digital twin of our CNC machines. The value of this cannot be overstated: bugs reported by customers could be addressed, and new features and tailored machines could be developed and tested, all without physical hardware. On the A2MC we can only reasonably do any of that with a fully functional machine. Buying into NVIDIA's hardware stack also promises other foundational improvements to our machines, such as deep learning vision models and predictive maintenance.
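One way we imagine evaluating the determinism question is a simple probe that launches a kernel on a 100 microsecond period and records how far each period drifts from its deadline. The sketch below is an assumed test harness, not a claim that NVIDIA hardware meets the +/-10 ns target; it only measures host-side scheduling jitter with a busy-wait loop, and the kernel and names are hypothetical.

```
// jitter_probe.cu -- hypothetical harness: launches an empty kernel every
// 100 us and records the worst-case overshoot past each deadline.
#include <chrono>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void emptyKernel() {}

int main()
{
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::microseconds(100);
    const int iterations = 10000;

    long long worstNs = 0;
    auto next = clock::now() + period;

    for (int i = 0; i < iterations; ++i) {
        emptyKernel<<<1, 1>>>();
        cudaDeviceSynchronize();

        // busy-wait to the next 100 us deadline
        while (clock::now() < next) {}

        auto now = clock::now();
        long long lateNs =
            std::chrono::duration_cast<std::chrono::nanoseconds>(now - next).count();
        if (lateNs > worstNs) worstNs = lateNs;
        next += period;
    }

    printf("worst-case overshoot past the 100 us deadline: %lld ns\n", worstNs);
    return 0;
}
```

If anyone has measured this kind of loop on Jetson or a discrete GPU, real numbers for the achievable jitter would be very helpful.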