I’m trying to convert a matrix of hex values into a matrix of decimal values. That’s all well and good, but during the conversion I need two temporary variables to hold the two digits of each hex value. I’m using CUDA to run all of the conversions on the matrix (theoretically) at the same time. So, can each thread just use a pair of ordinary int variables to hold the digits for its own cell, or should I be using an intermediate matrix to hold that information? All of the examples I have found convert directly from one matrix into another, so I’m not sure what the best approach is here.
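For concreteness, here is roughly the per-cell conversion I have in mind, written as a plain C function (the two-ASCII-character cell layout and the function name are just assumptions for illustration; inside a CUDA kernel this would be the body each thread runs on its own cell):

```c
/* Convert one matrix cell holding a two-digit hex value stored as two
   ASCII characters (e.g. 'f','f' -> 255). `hi` and `lo` are the two
   temporaries in question — in a CUDA kernel they would be ordinary
   per-thread local variables, so each thread gets its own private copy. */
static int hex_cell_to_dec(char high_char, char low_char)
{
    /* temporary #1: value of the high hex digit */
    int hi = (high_char <= '9') ? high_char - '0'
                                : (high_char | 32) - 'a' + 10;
    /* temporary #2: value of the low hex digit */
    int lo = (low_char <= '9') ? low_char - '0'
                               : (low_char | 32) - 'a' + 10;
    return hi * 16 + lo;
}
```

The question is whether those two `int`s are fine as-is when thousands of threads run this at once, or whether they need to live in a separate matrix.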