As our valued NVIDIA DGX customer, we’re giving you direct access to our best practices and AI expertise through a series of live technical sessions. Get answers to your questions about DGX systems, with topics ranging from planning to deployment to ongoing optimization. The sessions are led by NVIDIA DGXperts — AI-fluent professionals who have deployed thousands of DGX systems like yours. These sessions are exclusive to DGX users, and registration links to upcoming sessions will be posted here.
To make sure you receive communications about future sessions, sign up for an account on our enterprise support portal. Your NVIDIA enterprise account manager can add you to the portal, or you can contact us at firstname.lastname@example.org.
Due to overwhelming interest in the new Multi-Instance GPU (MIG) feature of DGX A100, we held three sessions exploring it in more detail. DGX A100 with MIG enables your team to support more AI workloads, right-size resources for every job, and increase overall system utilization. Check out the replays here:
MIG Technical Series (Part 1 of 3): Overview of MIG on DGX
MIG Technical Series (Part 2 of 3): MIG Use and Configuration on DGX
MIG Technical Series (Part 3 of 3): MIG in a Cluster
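As a flavor of what the series covers: MIG partitioning on DGX A100 is driven through `nvidia-smi`. A minimal sketch (the GPU index and profile IDs below are examples and vary by system and desired partition layout):

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset or reboot to take effect)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles available on this GPU
sudo nvidia-smi mig -lgip

# Example: create two GPU instances from profile ID 9 (3g.20gb on an
# A100 40GB) and their default compute instances in one step
sudo nvidia-smi mig -cgi 9,9 -C

# Verify the resulting MIG devices
nvidia-smi -L
```

Each MIG device then appears as an independently schedulable GPU to containers and frameworks, which is how MIG lets you right-size resources per job.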
Make sure you sign up for our future monthly sessions! We'd love to see you there!
Our next technical session is on Friday, August 19 at 8am PDT. Register here. Note: Registration is required and will only be approved for customers with an active support contract.
FinMegatron: Building and Explaining Large Finance Language Models in English/Chinese/Japanese
We’ll introduce a novel transformer-based model, FinMegatron. We’ll demonstrate how to customize NVIDIA Megatron-LM and pre-train it on finance domain-specific data (such as financial news and technical reports) using NVIDIA DGX SuperPOD and DGX A100 systems.
Furthermore, we will show how to adapt the architecture to incorporate temporal (time-sensitive) information on top of the standard language data. We extend Megatron with seven additional pre-training tasks important in the financial context, such as financial causal inference and time-sensitive pre-training for market understanding. We have built English, Chinese, and Japanese BERT and GPT-2 variants of FinMegatron and released them through NVIDIA NGC, with more languages on the way.
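For attendees curious what a Megatron-LM pre-training launch looks like in practice, here is a hedged sketch using the public Megatron-LM `pretrain_bert.py` entry point. All paths, data file names, and hyperparameters below are illustrative placeholders, not FinMegatron's actual configuration:

```shell
# Illustrative single-GPU launch of Megatron-LM BERT pre-training on a
# domain-specific corpus. Paths and hyperparameters are placeholders;
# multi-node DGX runs would add tensor/pipeline parallelism flags.
python pretrain_bert.py \
    --num-layers 24 \
    --hidden-size 1024 \
    --num-attention-heads 16 \
    --micro-batch-size 4 \
    --seq-length 512 \
    --max-position-embeddings 512 \
    --train-iters 1000000 \
    --data-path data/finance_corpus_text_sentence \
    --vocab-file data/finance-vocab.txt \
    --split 949,50,1 \
    --lr 0.0001 \
    --fp16
```

The session will cover how domain data is preprocessed into Megatron's binary format and how additional pre-training tasks are wired into a run like this.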
Xianchao Wu, Senior Solution Architect, NVIDIA