Sometimes I forward X in order to get a larger screen. For example, if I have a small monitor on one Linux system but am working from a Linux system with a larger monitor, then I simply use "ssh -Y". This requires an X server on the local machine, and if the app uses the GPU (e.g., CUDA or OpenGL), then the local system would need hardware capable of this (e.g., an NVIDIA video card). But if you don't forward GUI apps this way, you wouldn't need an NVIDIA GPU. Flashing is an example of a task where the host PC does not require such a GPU.
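As a minimal sketch of that workflow (the host name "jetson" and user name here are hypothetical, not from any real setup):

```shell
#!/bin/sh
# -Y enables trusted X11 forwarding: the app runs on the remote machine,
# but its windows draw on the local X server.
remote="user@jetson"   # hypothetical remote host
cmd="ssh -Y $remote"
echo "$cmd"
# After logging in, the remote shell's DISPLAY points at the forwarded
# socket (e.g., "localhost:10.0"), so a GUI app launched there, such as:
#   glxgears &
# appears on the local monitor.
```

Note that any GPU-accelerated rendering happens on the machine where the X server runs, which is why the local side needs the capable video card.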
I think anything with 8 GB of RAM will be good to go, and 16 GB might be all you would ever need, even when compiling kernels. You don't need to build kernels to flash and install, although there might be times when it is useful, e.g., for adding a driver which wasn't there. Compiling with 4 cores takes far less RAM than compiling with 16 cores, since each parallel compile job consumes its own memory. If you were training models, maybe you'd want 64 GB and a very fast GPU, but for what you are describing, 16 GB would be all you'd ever need for years, and 8 GB would not be much of a sacrifice.
There really isn't much need for a fast CPU or lots of cores on a machine used for flashing. More cores build kernels faster, but unless you work on kernels, it won't matter. Even if you do work on kernels, it usually isn't that hard to wait for 6 cores to finish, and rarely urgent enough to need 16.
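To make the RAM-versus-cores tradeoff concrete, here is a small sketch that caps the `make -j` job count by both CPU count and available RAM. The ~2 GB-per-job figure is an assumption (a rough rule of thumb), not a measured requirement:

```shell
#!/bin/sh
# Cap parallel kernel-build jobs so peak RAM stays reasonable.
# Assumption: roughly 2 GB of RAM per compile job (rule of thumb).
mem_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
mem_jobs=$(( mem_kb / (2 * 1024 * 1024) ))  # jobs that fit in RAM
cpu_jobs=$(nproc)                            # jobs that fit on the CPU
jobs=$(( mem_jobs < cpu_jobs ? mem_jobs : cpu_jobs ))
[ "$jobs" -lt 1 ] && jobs=1                  # always allow at least one
echo "$jobs"
# Then build with:  make -j"$jobs" ...
```

This is why a 4-job build on an 8 GB machine is comfortable while a 16-job build might swap: the job count, not the core count itself, drives memory use.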
Hard drive space is by far the one spec you need to make sure you have enough of.