Error launching "Chat with RTX"

Description

Hello.
When launching "Chat with RTX", the following error appears and the app does not start. I previously installed ChatRTX and everything was fine!

Environment

I have an RTX 4090 and a Core i9-14900HX.

Error

[TensorRT-LLM] TensorRT-LLM version: 0.9.0
Traceback (most recent call last):
  File "C:\Users\HM\AppData\Local\NVIDIA\ChatWithRTX\RAG\trt-llm-rag-windows-main\app.py", line 97, in <module>
    llm = TrtLlmAPI(
  File "C:\Users\HM\AppData\Local\NVIDIA\ChatWithRTX\RAG\trt-llm-rag-windows-main\trt_llama_api.py", line 123, in __init__
    self._model = runner_cls.from_dir(**runner_kwargs)
  File "C:\Users\HM\AppData\Local\NVIDIA\ChatRTX\env_nvd_rag\lib\site-packages\tensorrt_llm\runtime\model_runner.py", line 483, in from_dir
    model_config, other_config = read_config(config_path)
  File "C:\Users\HM\AppData\Local\NVIDIA\ChatRTX\env_nvd_rag\lib\site-packages\tensorrt_llm\runtime\model_runner.py", line 78, in read_config
    return _builder_to_model_config(config)
  File "C:\Users\HM\AppData\Local\NVIDIA\ChatRTX\env_nvd_rag\lib\site-packages\tensorrt_llm\runtime\model_runner.py", line 132, in _builder_to_model_config
    mamba_conv1d_plugin = bool(plugin_config['mamba_conv1d_plugin'])
KeyError: 'mamba_conv1d_plugin'
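
In case it helps others hitting the same trace: the KeyError means the plugin_config read from the engine's config.json has no 'mamba_conv1d_plugin' entry, which typically points to an engine built with an older TensorRT-LLM than the 0.9.0 runtime ChatRTX installed. A minimal diagnostic sketch follows; the engine path and the exact config layout are assumptions on my part, not something from the original post.

import json
from pathlib import Path

# Hypothetical location - point this at the engine folder ChatRTX built for your model.
ENGINE_DIR = Path(r"C:\path\to\your\trt_engine")

config = json.loads((ENGINE_DIR / "config.json").read_text())

# Depending on which TensorRT-LLM version built the engine, the plugin settings
# may sit at the top level or inside a nested config section (assumed layouts).
plugin_config = (config.get("plugin_config")
                 or config.get("build_config", {}).get("plugin_config", {})
                 or config.get("builder_config", {}).get("plugin_config", {}))

if "mamba_conv1d_plugin" in plugin_config:
    print("Key present - the KeyError is probably caused by something else.")
else:
    print("Key missing - the engine config likely predates the installed 0.9.0 "
          "runtime; rebuilding the engine (or reinstalling ChatRTX so it rebuilds) "
          "should remove the mismatch.")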

Hi,
Apologies for the delay.
You may raise issues related to TRT-LLM here.