NVIDIA AI Foundation Models: Build Custom Enterprise Chatbots and Co-Pilots with Production-Ready LLMs

Originally published at: https://developer.nvidia.com/blog/nvidia-ai-foundation-models-build-custom-enterprise-chatbots-and-co-pilots-with-production-ready-llms/

Large language models (LLMs) are revolutionizing data science, enabling advanced capabilities in natural language understanding, AI, and machine learning. Custom LLMs, tailored for domain-specific insights, are finding increased traction in enterprise applications. The NVIDIA Nemotron-3 8B family of foundation models is a powerful new tool for building production-ready generative AI applications for the enterprise–fostering innovations…

I cannot find the nvcr.io/ea-bignlp/ga-participants/nemofw-inference:23.10 container. Is it available?
The NVIDIA NGC link mentioned in the post could not be found.

I can only find nvcr.io/ea-bignlp/bignlp-inference:22.08-py3.

I have just realized that my application for the NeMo framework has not been processed yet; it is still under review. Could that be the reason?

Hi Pedro, you’re correct: it is because your application for the NeMo framework has not been approved yet.

The NeMo inference container recently transitioned from beta to GA, which requires you to reapply and agree to the new EULA to regain access. Please submit the application again at https://developer.nvidia.com/nemo-framework, and don’t hesitate to let us know if you still encounter issues.

Hi @viviennez, I can now access the NeMo framework container nvcr.io/nvidia/nemo:23.10, but I can no longer see nvcr.io/ea-bignlp/ga-participants/nemofw-inference:23.10 under [org/team] bignlp/ga-participants. How can I get access to that container? Thank you!

Hi @thangdt277, are you able to pull the container using the following commands?

1. Log in to your NGC organization:

   docker login nvcr.io

2. Fetch the NeMo framework inference container (latest path):

   docker pull nvcr.io/ea-bignlp/ga-participants/nemofw-inference:23.10_fix_v2
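For reference, the login and pull steps above can be sketched as one non-interactive script. This is a sketch under the standard NGC convention that the registry username is the literal string `$oauthtoken` and the password is your NGC API key; the `NGC_API_KEY` environment variable is an assumed placeholder you must set yourself, and the docker steps are guarded so the script is a no-op on machines without Docker or a key.

```shell
#!/bin/sh
# Sketch: non-interactive NGC login and pull for the NeMo inference container.
# Assumes NGC_API_KEY holds an API key generated in your NGC account settings.

REGISTRY="nvcr.io"
IMAGE="ea-bignlp/ga-participants/nemofw-inference"
TAG="23.10_fix_v2"

if command -v docker >/dev/null 2>&1 && [ -n "${NGC_API_KEY:-}" ]; then
  # NGC registries authenticate with the literal username `$oauthtoken`
  # (single quotes keep it from being expanded as a shell variable).
  echo "$NGC_API_KEY" | docker login "$REGISTRY" \
    --username '$oauthtoken' --password-stdin

  docker pull "$REGISTRY/$IMAGE:$TAG"
fi
```

Piping the key via `--password-stdin` avoids leaving it in shell history or process listings, which matters if the pull runs in CI.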