Originally published at: Speeding Up Semantic Segmentation Using MATLAB Container from NVIDIA NGC | NVIDIA Technical Blog
Gone are the days of using a single GPU to train a deep learning model. With computationally intensive algorithms such as semantic segmentation, a single GPU can take days to optimize a model. But multi-GPU hardware is expensive, you say. Not any longer: NVIDIA multi-GPU hardware on cloud instances such as the AWS P3 allows you to pay…