Issues with Integrating Fine-Tuned Models (e.g., MegaMolBART, ESM2) into BioNeMo: Unexpected Key(s) in State_dict

Has anyone encountered issues integrating custom fine-tuned models (like MegaMolBART or ESM2) into BioNeMo’s inference pipeline? Specifically, I’m hitting "Unexpected key(s) in state_dict" errors even though the model appears to be correctly trained. Is there a known compatibility issue with certain pre-trained models, or do I need to adjust specific configurations in BioNeMo to handle these models?
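For context, the error I’m seeing is the standard PyTorch strict-load failure. A minimal sketch of how it arises and how I’ve been diagnosing it (this uses a toy `nn.Module`, not BioNeMo’s actual model classes; the `"model."` prefix is just an assumed example of a wrapper prefix a training harness might add):

```python
import torch
import torch.nn as nn

# Toy stand-in for a fine-tuned encoder (hypothetical; not BioNeMo's API).
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

model = TinyEncoder()

# Simulate a checkpoint whose keys carry a wrapper prefix (e.g. "model."
# added by a training harness) -> triggers "Unexpected key(s) in state_dict".
ckpt = {"model." + k: v for k, v in model.state_dict().items()}

try:
    model.load_state_dict(ckpt)  # strict=True by default, so this raises
except RuntimeError as e:
    print("strict load failed:", type(e).__name__)

# Diagnosis: load with strict=False and inspect the reported mismatches.
result = model.load_state_dict(ckpt, strict=False)
print("unexpected keys:", result.unexpected_keys)
print("missing keys:", result.missing_keys)

# Workaround for a known prefix: strip it before loading strictly.
stripped = {k.removeprefix("model."): v for k, v in ckpt.items()}
model.load_state_dict(stripped)  # exact key match, loads cleanly
```

Stripping the prefix works for my toy reproduction, but I’d like to know whether BioNeMo expects a particular checkpoint layout instead of this kind of manual remapping.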

Hi @julianwelchmuniz ,

Can you provide more details on the BioNeMo framework version, the type of fine-tuned model, and the error log? Assuming BioNeMo v1.x, here is a link to an example of fine-tuning and inference: Finetune pre-trained models in BioNeMo — NVIDIA BioNeMo Framework

Thanks.