How to add a custom model to Chat with RTX?

How do I add Nous-Hermes-Llama2-13b to Chat with RTX?

Also, why didn't the Llama option show up when I was installing Chat with RTX? I have the files for it.


I don't know yet how to add another model, but it looks like you don't have enough GPU memory to run Llama 2 13B, only Mistral 7B, same as on my 8 GB VRAM 3090 Razer laptop.
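Just as a rough back-of-the-envelope (my own assumptions, not official Chat with RTX numbers): with 4-bit quantized weights plus a couple of GB of headroom for KV cache and runtime, a 7B model fits comfortably in 8 GB, while a 13B model is borderline and really wants 12 GB or more.

```python
# Rough VRAM estimate for a locally run LLM.
# Assumes 4-bit quantized weights and a flat ~2 GB allowance for
# KV cache, activations and runtime overhead (ballpark, not measured).

def estimate_vram_gb(n_params_billion: float, bits_per_weight: int = 4,
                     overhead_gb: float = 2.0) -> float:
    weight_gb = n_params_billion * bits_per_weight / 8  # billions of params -> GB of weights
    return weight_gb + overhead_gb

print(f"Mistral 7B  : ~{estimate_vram_gb(7):.1f} GB")   # ~5.5 GB -> fits in 8 GB
print(f"Llama 2 13B : ~{estimate_vram_gb(13):.1f} GB")  # ~8.5 GB -> tight on 8 GB, OK-ish on 12 GB+
```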

Cheers G.


Ah, I see.

But I do have 16 GB of VRAM; I have a 4070 Ti, so it should be able to handle it.

I thought the 4070 Ti only has 12 GB and the Ti Super has 16?

Oh yeah, sorry, I meant 12 GB.

I'd also like to know, since I need to add a model that speaks Portuguese.