I am trying to figure out how to use the Clara 4.0 early access for the fastMRI VarNet with as few code modifications as possible.
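To make the intent concrete, here is the kind of MMAR-style train config fragment I have in mind, wrapping the existing fastMRI components by import path. This is only a sketch: I have not tested it against the Clara 4.0 EA schema, and the exact keys (`path`, `args`, section names) are assumptions based on the bring-your-own-components pattern in earlier Clara Train configs.

```json
{
  "train": {
    "model": {
      "path": "fastmri.models.VarNet",
      "args": {"num_cascades": 12}
    },
    "loss": {
      "path": "fastmri.losses.SSIMLoss",
      "args": {}
    },
    "optimizer": {
      "path": "torch.optim.Adam",
      "args": {"lr": 0.001}
    }
  }
}
```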
I can wrap the fastMRI model, optimizer, and loss function as MONAI components. Transforms are not as straightforward, though, because some of the pre-processing and post-processing code lives outside the transforms, and I do not want to reimplement all of it as MONAI transforms. Instead, I want to know whether it is possible to use the torch DataLoader directly in Clara's train config. It would be a real time-saver.
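For what it's worth, the pre-processing can stay in plain torch by keeping it inside the Dataset's `__getitem__`, so the resulting DataLoader already yields fully processed samples — MONAI's engines accept any torch DataLoader, and the question is only whether Clara's config does too. A minimal sketch (the `fastmri_style_preprocess` function below is a hypothetical stand-in for the existing fastMRI pre-processing code, not part of fastMRI itself):

```python
import torch
from torch.utils.data import Dataset, DataLoader

def fastmri_style_preprocess(kspace):
    # Hypothetical stand-in for fastMRI's existing pre-processing code;
    # in practice this would call the untouched fastMRI functions.
    return kspace.abs()

class WrappedDataset(Dataset):
    """Wraps raw samples and applies the existing pre-processing on access,
    so nothing has to be rewritten as a MONAI transform."""

    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return fastmri_style_preprocess(self.samples[idx])

# Dummy data standing in for fastMRI k-space volumes.
samples = [torch.randn(4, 4) for _ in range(8)]
loader = DataLoader(WrappedDataset(samples), batch_size=2)
batch = next(iter(loader))  # a (2, 4, 4) batch of pre-processed samples
```

This `loader` can be handed directly to a `monai.engines.SupervisedTrainer` as `train_data_loader`; whether Clara's train config can reference it the same way is exactly what I am asking.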
fastMRI provides PyTorch Lightning wrappers around its torch implementation. It is easy to use Lightning modules with MONAI, but is it possible to use them with Clara? This would help with my research and also with the federated learning project, since we could wrap existing torch code with ease.
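The reason wrapping Lightning modules is easy: every `LightningModule` is also a `torch.nn.Module`, so the network, optimizer, and loss can be pulled out and used in any plain training loop. A sketch of that extraction — the class below is a stand-in mimicking the relevant Lightning hooks (the `configure_optimizers` name is the real Lightning hook; the VarNet and loss placeholders are assumptions):

```python
import torch
from torch import nn

class LitVarNetStandIn(nn.Module):
    """Stand-in for a pytorch_lightning.LightningModule; the same
    extraction works on the real fastMRI VarNetModule."""

    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 1)   # placeholder for the actual VarNet
        self.loss_fn = nn.MSELoss()  # placeholder for fastMRI's loss

    def forward(self, x):
        return self.net(x)

    def configure_optimizers(self):
        # Same hook name Lightning calls to build the optimizer.
        return torch.optim.Adam(self.parameters(), lr=1e-3)

lit = LitVarNetStandIn()
network = lit                            # hand this to MONAI/Clara as the model
optimizer = lit.configure_optimizers()   # reuse the Lightning-configured optimizer
loss_function = lit.loss_fn

# One manual step, showing the extracted pieces train outside Lightning.
x, y = torch.randn(4, 8), torch.randn(4, 1)
loss = loss_function(network(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

If Clara can accept components extracted this way, existing Lightning-based torch code would port over with almost no changes.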