We’re currently evaluating NVIDIA GPUs with large memory capacities for real-time computer vision applications using CUDA. Does the NVIDIA A16 behave like 4 individual GPUs, or can it also be used as a single unified GPU? Can a single CUDA program access all 4 × 1280 = 5120 CUDA cores and the full 4 × 16 GB = 64 GB of memory?
From the datasheet I understand that it is designed for virtualization workloads. Does this mean it presents itself to the operating system as 4 separate GPUs?
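For context, here is the kind of sanity check we were planning to run on the card, a minimal sketch using the CUDA runtime API. The reported device count and properties are exactly what we are unsure about, so the output below is not assumed:

```cuda
// Enumerate the devices the CUDA runtime exposes and print their
// properties. If the A16 presents as 4 separate GPUs, we would expect
// a count of 4 here, each with its own 16 GB (assumption, not verified).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA devices visible: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("  [%d] %s: %zu MiB global memory, %d SMs\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024 * 1024),
                    prop.multiProcessorCount);
    }
    return 0;
}
```

A single process can still use all enumerated devices by switching with `cudaSetDevice(i)` and launching work on each, but that is multi-GPU programming, not a single unified device, which is the distinction we are asking about.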