Add Llama 2 7b model to Chat with RTX?

Is there a way to add more than the default Mistral and Llama 13b models to the Chat with RTX?


I just installed it and only see the Mistral model (7B int4) lol. Looking to add the Llama model too, and others, if anyone can tell me how and it's not too far over my head technically. :)

Hello. Most likely you do not have enough VRAM, which is what the installer detects. You probably need about 16 GB of VRAM for the Llama 13B option to install. I am looking into an eGPU.
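If you want to confirm what the installer is seeing, you can query your card's total VRAM with `nvidia-smi`. Here's a small Python sketch that does that; note the 16 GB threshold is just the figure mentioned above, not an official requirement, and `total_vram_gb` / `check_vram` are hypothetical helper names:

```python
import subprocess

def total_vram_gb(smi_line: str) -> float:
    # Parse a line like "16384 MiB" as returned by:
    #   nvidia-smi --query-gpu=memory.total --format=csv,noheader
    mib = float(smi_line.strip().split()[0])
    return mib / 1024

def check_vram(threshold_gb: float = 16.0) -> None:
    # Hypothetical helper: runs nvidia-smi and reports whether the card
    # clears the ~16 GB figure mentioned in this thread.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()[0]
    gb = total_vram_gb(out)
    status = "enough" if gb >= threshold_gb else "not enough"
    print(f"{gb:.1f} GB total VRAM: {status} for the Llama 13B option")

if __name__ == "__main__":
    check_vram()
```

Run it on the machine where you installed Chat with RTX; if it reports less than ~16 GB, that would explain why the installer skipped the 13B model.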
