Hi! Could someone please help me with fine-tuning the MegaMolBART model? I'm essentially trying to replicate the fine-tuning process from this repository (GitHub - Sanofi-Public/LipoBART) on my own set of molecules, but recent updates to the BioNeMo documentation seem to conflict with the tutorial (BioNeMo - MegaMolBART Inferencing for Generative Chemistry — NVIDIA BioNeMo Framework) when I follow it on a GCP VM. Is there a guide or tutorial that covers the whole process?