I am trying to install NVIDIA NIM on my personal laptop following the procedure on this page:
Can NVIDIA NIM be installed on a personal laptop that has an Intel GPU?
Also, if I eventually have to rent a VM from AWS to test NIM, what VM configuration should I look for?
If you do not have GPU infrastructure to self-host NIM, please check out LaunchPad.
NVIDIA LaunchPad provides free access to enterprise NVIDIA hardware and software through an internet browser. Users can experience the power of AI with end-to-end solutions through guided hands-on labs or as a development sandbox. Test, prototype, and deploy your own applications and models against the latest and greatest that NVIDIA has to offer.
I am following the commands on the “docker” tab of this page to do the installation:
The commands given are the following, which I can’t run on Windows even though I have WSL2 and Docker with Ubuntu 22.04 installed.
So I am wondering where exactly I need to execute these commands if I am working with Windows 11.
Should I execute them through WSL or CMD?
I tried to get an instance from Google Cloud with an A100 GPU so I could test NIM, but my request was declined.
Is there any chance I can run NIM somewhere else for free?
I also tried running Docker on Oracle Cloud and got a GPU compatibility error, because that VM uses an AMD GPU.
Also, this link, GitHub - NVIDIA/nim-anywhere: Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench, says to use AI Workbench, and when I try to install it on my personal laptop I get a Docker installation failure. Do I have to use AI Workbench on a VM that has an NVIDIA GPU?
Hi @ariuskudar – to run NIM you need a machine with an NVIDIA GPU and the ability to launch Docker containers. You can use AI Workbench to connect to a server that has an NVIDIA GPU, but either way you’ll have to reserve the cloud VM.
Any cloud VM with an NVIDIA GPU with Compute Capability ≥ 7.0 should work with NIM. So on AWS, that would be P5, P4, P3, G6e, G6, G5g, G5, or G4dn instances.
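As a quick sanity check on a VM you are considering, you can query the GPU’s compute capability from a shell (a sketch, assuming a reasonably recent NVIDIA driver, since older drivers don’t support the `compute_cap` query field):

```shell
# Print the name and compute capability of each visible NVIDIA GPU.
# For NIM you want 7.0 or higher, e.g. "8.0" (A100) or "7.5" (T4).
nvidia-smi --query-gpu=name,compute_cap --format=csv
```

If this command is missing or reports no devices, the VM either has no NVIDIA GPU or the driver is not installed, and NIM containers will fail with a GPU compatibility error like the one you saw on Oracle Cloud.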
In terms of trying things for free – the hosted APIs on build.nvidia.com are the same as what you would get from deploying NIM yourself. If you do need to deploy NIM yourself, I’d recommend taking another look at the LaunchPad lab “NVIDIA NIM for deploying large language models (LLMs)”.
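To illustrate the free hosted option: the build.nvidia.com endpoints speak the same OpenAI-compatible API as a self-deployed NIM. A minimal sketch, assuming you have generated an API key on build.nvidia.com and exported it as `NVIDIA_API_KEY` (the model name is just an example; pick any model listed on the site):

```shell
# Call a hosted NIM endpoint on build.nvidia.com instead of a local deployment.
curl https://integrate.api.nvidia.com/v1/chat/completions \
  -H "Authorization: Bearer $NVIDIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta/llama3-8b-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 64
      }'
```

Because the API surface is the same, code written against this endpoint should later work against a self-hosted NIM by only changing the base URL.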
I finally got things going and ran the code in the project notebook here:
but I am getting the following error that some NeMo modules are not installed; can you please advise:
Traceback (most recent call last):
  File "/NeMo/examples/nlp/language_modeling/tuning/megatron_gpt_finetuning.py", line 18, in <module>
    from nemo.collections.nlp.models.language_modeling.megatron_gpt_sft_model import MegatronGPTSFTModel
ModuleNotFoundError: No module named 'nemo'
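A common cause of `No module named 'nemo'` (an assumption about this setup, not a confirmed diagnosis) is that the notebook kernel is running outside the environment where NeMo is installed. Two typical ways to make the `nemo` package available, with an illustrative container tag:

```shell
# Option 1: install the NeMo toolkit into the current Python environment
pip install "nemo_toolkit[all]"

# Option 2: run inside NVIDIA's NeMo container from NGC, which ships with
# nemo preinstalled (the tag below is illustrative; pick a current one)
docker run --gpus all -it --rm nvcr.io/nvidia/nemo:24.05
```

The NeMo example scripts are developed and tested against the container, so option 2 tends to avoid dependency mismatches.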
I tried this code block, and it says Cython is not installed, although my terminal shows that it is:
      line 318, in run_setup
        exec(code, locals())
      File "<string>", line 5, in <module>
    ModuleNotFoundError: No module named 'Cython'
    [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
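On the Cython error: pip builds packages in an isolated environment by default, so Cython being installed in your terminal’s environment does not necessarily make it visible to the build subprocess. Two common workarounds to try (assumptions, since the failing package isn’t shown in the output; `<the-failing-package>` below is a placeholder):

```shell
# Workaround 1: make sure Cython is present in the current environment
pip install Cython

# Workaround 2: disable build isolation so the build subprocess reuses
# the current environment, where Cython is already installed
pip install --no-build-isolation <the-failing-package>
```

If the package’s build requirements are declared correctly in its `pyproject.toml`, upgrading pip and retrying with isolation enabled is the cleaner fix.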