I have trained many networks in Clara before with volumes under 200 MB. Those volumes are roughly 350 voxels cubed, but I always trained by feeding only patches into the network due to VRAM limitations.
Now I want to train the network on much larger volumes (almost 10 GB each, around 2000 voxels cubed; of course, I will still be using patching). When I try to start this training, the first iteration never runs: system RAM maxes out over about 10 minutes and then the process crashes with a "Killed" message.
So my question is: is Clara trying to fit the whole volumes into RAM? I am not sure what happens at this pre-training setup stage that would cause the crash.
Edit: Training does start with just one 10 GB scan in each of the train and validation sets, but with multiple 10 GB scans per set it crashes before the first iteration. Why would Clara be trying to load my entire dataset into RAM before training?
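For what it's worth, my working guess is a caching dataset. A minimal sketch of that behavior, assuming Clara's data pipeline sits on MONAI (the file names, transform choices, and roi_size below are hypothetical placeholders, not my actual config):

```python
# Sketch (assumption): if the training config uses a caching dataset such as
# MONAI's CacheDataset with cache_rate=1.0, every volume is loaded and the
# deterministic transforms applied up front, holding all volumes in system RAM
# before the first iteration. With several ~10 GB scans, that alone would
# exhaust RAM, which matches the "Killed" crash described above. A plain
# Dataset (or a reduced cache_rate) loads each volume from disk on demand.

from monai.data import CacheDataset, Dataset, DataLoader
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, RandSpatialCropd

# Hypothetical file list; paths are placeholders.
train_files = [{"image": "scan_0.nii.gz", "label": "seg_0.nii.gz"}]

transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    # Random patch sampling; only the transforms before this one get cached.
    RandSpatialCropd(keys=["image", "label"], roi_size=(96, 96, 96), random_size=False),
])

# Caches ALL loaded volumes in RAM before training starts.
cached_ds = CacheDataset(data=train_files, transform=transforms, cache_rate=1.0)

# Loads each volume lazily, keeping RAM usage near one volume per worker.
lazy_ds = Dataset(data=train_files, transform=transforms)

loader = DataLoader(lazy_ds, batch_size=1, num_workers=2)
```

If Clara does use something like this under the hood, is there a supported way to switch to lazy loading (or lower the cache fraction) for large volumes?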