TensorRT-LLM

Thank you so much. That wasn’t clear to me.

I’ll use vLLM for now.
(You mention SGLang, but from this pinned topic I understand that there's no SGLang container yet.)

Do you expect that support for Thor's GPU architecture (sm_110) will be added to TensorRT-LLM?