Federated learning

Hi,

How can I get help from radiologists to annotate images during a federated learning flow? I mean, in case the generated output is not as expected, I want a radiologist to annotate the image so that the model is trained with the annotated images and becomes more robust over time.

Hi,
Thanks for your interest in Clara Train and FL.
FYI, NVIDIA has just released NVFLARE as open source. This is the same framework Clara Train uses to run FL. Being open source, it can be used with any framework, not only Clara Train (MONAI).

To your question: you can use NVFLARE to run scripts on the clients to check the quality of the annotations, such as the distribution of classes, and to gather statistics on the underlying images. This lets you detect extreme errors such as classes out of range. Example: your classification problem has 5 classes, but some sites labeled cases as class 11, or used zero-based vs. one-based class indices.
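As a rough illustration (not part of NVFLARE itself), such a client-side check might look like the sketch below; the file layout, JSON label format, and directory path are assumptions:

```python
import json
from collections import Counter
from pathlib import Path

# Assumed setup: each client stores one JSON file per case with a "label" field,
# and the classification problem has 5 valid classes numbered 0-4.
VALID_CLASSES = set(range(5))

def check_labels(label_dir: str):
    counts = Counter()
    out_of_range = []
    for path in Path(label_dir).glob("*.json"):
        label = json.loads(path.read_text())["label"]
        counts[label] += 1
        if label not in VALID_CLASSES:
            out_of_range.append((path.name, label))

    print("class distribution:", dict(counts))
    if out_of_range:
        # e.g. a site using one-based classes, or an unexpected class 11, shows up here
        print("labels outside the expected range:", out_of_range)

check_labels("/data/site_1/labels")  # hypothetical client data path
```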

If you are referring to the accuracy of the labels themselves, such as the contour of a segmentation, or to one site misclassifying, I would say that is very hard to detect automatically. You may need to randomly select some cases and have a central site or group of experts do quality control.
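A minimal sketch of that spot-check idea (the site names, case IDs, and sample size are invented for illustration):

```python
import random

# Assumed: each site reports its case IDs; a central expert group reviews a random subset.
site_cases = {
    "site_1": [f"case_{i:03d}" for i in range(200)],
    "site_2": [f"case_{i:03d}" for i in range(150)],
}

random.seed(42)  # reproducible selection for this QC round
qc_sample = {site: random.sample(cases, k=10) for site, cases in site_cases.items()}
print(qc_sample)
```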

Hope that helps

Thanks @aharouni for the response. If I understand correctly, NVFlare works only for classification use cases. If we need segmentation, it may not be the right choice.

Also, after reading material on federated learning, I found that only the model weights are sent to the core system for retraining. In this case, how does the aggregation of model weights happen? What is the best strategy to retrain the model at the core system?

Hi
I think you misunderstood my response. NVFlare is a general open-source framework for any collaborative workflow. It can be used for any machine learning or deep learning problem (including classification, segmentation, NLP, genomics, etc.).

I am not sure I understand your question. In the NVFlare examples, the weights are sent to the server for aggregation. I don't get what you mean by

“What is best strategy to retrain the model at core system”
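To make the aggregation step concrete: in its simplest form, federated averaging (FedAvg) replaces the global model with a weighted average of the client weights, weighted by each client's local sample count. A schematic NumPy sketch (the site names, sample counts, and layer shapes are invented, not NVFlare code):

```python
import numpy as np

# Schematic FedAvg: each client sends its updated weights plus its local sample count,
# and the server computes the sample-weighted average as the new global model.
client_updates = {
    "site_1": {"n_samples": 1200, "weights": {"layer1": np.random.rand(3, 3)}},
    "site_2": {"n_samples": 800,  "weights": {"layer1": np.random.rand(3, 3)}},
}

total = sum(u["n_samples"] for u in client_updates.values())
global_weights = {}
for update in client_updates.values():
    frac = update["n_samples"] / total
    for name, w in update["weights"].items():
        global_weights[name] = global_weights.get(name, 0) + frac * w

# global_weights is then broadcast back to the clients for the next training round
print(global_weights["layer1"])
```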

Thanks @aharouni for the response. What I mean by NVFlare weight aggregation is: once the weights have been sent to the central system, how are they merged into the central model? There are different ways to handle this to avoid the outcome being impacted by bias in the data from any one institute. Can you please share the best strategy for both classification and segmentation models in the medical imaging world?

Hi @karunakar.r,

I think, based on the use case, the following learning algorithm and evaluation workflow can be used to avoid bias in the data:

Thanks

Hi karunakar.r,

This is a good question. NVIDIA FLARE can be used to train any model. This is up to the researcher’s implementation, and the most efficient aggregation method may depend on the model type, classification vs. segmentation, etc.

The question of retraining based on new or updated annotations falls more in line with the capabilities of MONAI Label. In particular, with v3.0, there are examples of retraining a DeepGrow model with newly annotated data. See the “What’s New” section for more.

We currently don’t provide an example of MONAI Label integration with NVIDIA FLARE, but given that both are open source this should definitely be possible.

-Kris

Hi karunakar.r,

This is another good question. NVIDIA FLARE provides a few examples of different aggregation methods that may help avoid bias, e.g., in the case of heterogeneous data among different institutions.

Take a look at the cifar10 example in the NVIDIA FLARE GitHub repository. It compares the FedAvg (Scatter and Gather) workflow with different data splits, as well as the FedProx and FedOpt aggregation methods.
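For intuition on how something like FedProx differs from plain averaging: the server-side aggregation stays largely the same, but each client adds a proximal term to its local loss that penalizes drift from the current global model, which can help with heterogeneous data. A rough PyTorch-style sketch (the model, criterion, optimizer, and mu value are placeholders, not code from the example):

```python
import torch

def fedprox_local_loss(task_loss, model, global_params, mu=0.01):
    """Add the FedProx proximal term mu/2 * ||w - w_global||^2 to a task loss."""
    prox = 0.0
    for p, g in zip(model.parameters(), global_params):
        prox = prox + torch.sum((p - g.detach()) ** 2)
    return task_loss + 0.5 * mu * prox

# Hypothetical usage inside a local training step (model, criterion, data assumed to exist):
# loss = fedprox_local_loss(criterion(model(x), y), model, global_params)
# loss.backward(); optimizer.step()
```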

The best strategy for a particular model type will ultimately depend on the model implementation as well as the input datasets, so it’s tough to make any concrete recommendations.

-Kris

Thanks a lot @kkersten. I understand now. I basically asked this question because of the quality of the model outcome after retraining. In one of our tasks, we trained a model and then retrained it with a few more images on top of the already existing weights.
The quality of the outcome when we train the model on the entire set of images is far better than when we used the retraining approach. This is for an object detection task. I will go through the material you shared.