I’m fascinated by the recently popular StyleGAN AI face-generation technology (https://github.com/NVlabs/stylegan). I have a Threadripper 1950X (3.9 GHz, 16 cores, 32 threads) box, but no CUDA-capable hardware at present, and for reasons not pertinent to the discussion it won’t be a practical option for some time.
While real hardware is clearly the ideal solution, emulation is fine for my initial dev/testing purposes, and that’s flexibility I’d like to have anyway. A performance hit of 10-100x is acceptable, and is somewhat mitigated by this high-end CPU.
I suspect the solution is configuring my Win10 environment and CUDA toolkit such that TensorFlow finds an emulated GPU (or falls back to the CPU). Currently, running the pretrained generator fails with:
InvalidArgumentError (see above for traceback): Cannot assign a device for operation Gs/_Run/Gs/latents_in: node Gs/_Run/Gs/latents_in (defined at C:\dev\StyleGAN\dnnlib\tflib\network.py:218) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[node Gs/_Run/Gs/latents_in (defined at C:\dev\StyleGAN\dnnlib\tflib\network.py:218) ]]
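One mechanism I’m aware of is TensorFlow’s soft device placement, which lets ops explicitly pinned to a missing GPU fall back to the CPU. Below is a minimal sketch of that session config, assuming TF 1.x (which StyleGAN targets); I don’t know exactly where in the StyleGAN codebase it would need to be wired in (presumably wherever the repo creates its session, e.g. dnnlib/tflib/tfutil.py), so treat this as an untested starting point:

    import tensorflow as tf

    # Allow ops explicitly assigned to /device:GPU:0 to fall back to the CPU
    # when no GPU is present (TF 1.x API).
    config = tf.ConfigProto(allow_soft_placement=True)
    config.log_device_placement = True  # optional: log where each op actually lands
    sess = tf.Session(config=config)

Is something along those lines the right approach here, or is there a cleaner way to get StyleGAN running CPU-only?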
The answer may be staring me in the face, but I felt this was likely to be a common question given what appears to be StyleGAN’s emerging popularity.