Using Qwen-7B and Llama-2-7b with the Small Language Model tutorial

Can I use the “Qwen/Qwen-7B” and “meta-llama/Llama-2-7b” models with your Small Language Model tutorial?
Both models exist on Hugging Face:
(Qwen/Qwen-7B at main, meta-llama/Llama-2-7b at main)
Thank you.

Hi,

Do you mean the tutorial below?

In the tutorial, Llama-2-7b can run on the Orin Nano (around 16 tokens/sec).
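For reference, loading the model through NanoLLM with the MLC backend looks roughly like the sketch below (based on the NanoLLM docs; the chat-tuned repo name, quantization mode, and prompt are placeholders, and the exact arguments may vary between NanoLLM versions):

```python
from nano_llm import NanoLLM

# Load Llama-2-7b through NanoLLM's MLC backend, as the SLM tutorial does.
# The repo name and quantization mode below are placeholders; gated Meta
# repos also need a Hugging Face access token (api_token=...).
model = NanoLLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # HuggingFace repo or local checkpoint path
    api="mlc",                        # supported backends include mlc, awq, hf
    quantization="q4f16_ft",          # 4-bit weights with fp16 activations
)

# Stream a short completion and print tokens as they arrive.
for token in model.generate("Once upon a time,", max_new_tokens=64, streaming=True):
    print(token, end="", flush=True)
```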

Thanks.

Hi @ygoongood12, I’ve not tested Qwen through MLC, but it appears to be supported in that API:

You may need to try it and add a chat template for it in NanoLLM. The SLM tutorial uses the NanoLLM library and the MLC LLM API for optimized performance. However, there are lots of APIs supported on Jetson AI Lab - some being faster, some being easier to use. It looks like Qwen is available in Ollama, which is easy to use, so you can try that first.
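If you go the Ollama route, the server exposes a REST API on port 11434 once a model has been pulled. Here is a minimal sketch, assuming the model is published under the "qwen:7b" tag (check the Ollama model library for the exact tag):

```python
import requests

# Ask a locally running Ollama server for a one-shot completion.
# "qwen:7b" is an assumed model tag; pull it first (e.g. `ollama pull qwen:7b`)
# and adjust the tag to whatever `ollama list` shows on your system.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen:7b",
        "prompt": "Summarize what a small language model is in one sentence.",
        "stream": False,   # return one JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```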

