Hello. I’m currently working with the diffusion-FWI model in PhysicsNeMo. Could you please share how many GPUs were used during training, and roughly how long it took to reach the published results? Thank you.