Hi! Could someone please help me with fine-tuning the MegaMolBART model? I'm essentially trying to replicate the fine-tuning process from this repository (GitHub - Sanofi-Public/LipoBART) on my own set of molecules, but it seems that some updates to the BioNeMo documentation create conflicts when I follow the tutorial "BioNeMo - MegaMolBART Inferencing for Generative Chemistry — NVIDIA BioNeMo Framework" on a GCP VM. Is there a guide or tutorial covering the whole process?
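In case it helps anyone reproduce the setup: the LipoBART-style fine-tuning workflows generally start from a CSV of SMILES strings split into train/validation sets. Below is a minimal, stdlib-only sketch of preparing such a split — the file names, column header, and split ratio are my own assumptions, not a layout required by BioNeMo or LipoBART:

```python
import csv
import random


def split_smiles(smiles, val_fraction=0.1, seed=42):
    """Shuffle a list of SMILES strings and split into (train, val) lists."""
    rng = random.Random(seed)
    shuffled = smiles[:]
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]


def write_csv(path, smiles):
    """Write a one-column CSV with a 'smiles' header (assumed layout)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["smiles"])
        for s in smiles:
            writer.writerow([s])


if __name__ == "__main__":
    # Hypothetical custom molecule set; replace with your own data.
    molecules = ["CCO", "c1ccccc1", "CC(=O)O", "CCN", "CCCC"]
    train, val = split_smiles(molecules, val_fraction=0.2)
    write_csv("train.csv", train)
    write_csv("val.csv", val)
```

The fixed seed keeps the split reproducible across runs, which matters when comparing fine-tuning configurations.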