Chat with RTX Model Version Problem While Starting

Hey there,

I successfully installed Chat with RTX on my computer (i9, RTX 3090 Ti, 32 GB RAM, and enough disk space). The installation completed correctly, but when I launch the program I get the error below:

Any suggestions on how to fix this?

Environment path found: C:\Users\mehmet\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag
App running with config
 {
    "models": {
        "supported": [
            {
                "name": "Mistral 7B int4",
                "installed": true,
                "metadata": {
                    "model_path": "model\\mistral\\mistral7b_int4_engine",
                    "engine": "llama_float16_tp1_rank0.engine",
                    "tokenizer_path": "model\\mistral\\mistral7b_hf",
                    "max_new_tokens": 1024,
                    "max_input_token": 7168,
                    "temperature": 0.1
                }
            },
            {
                "name": "Llama 2 13B int4",
                "installed": true,
                "metadata": {
                    "model_path": "model\\llama\\llama13_int4_engine",
                    "engine": "llama_float16_tp1_rank0.engine",
                    "tokenizer_path": "model\\llama\\llama13_hf",
                    "max_new_tokens": 1024,
                    "max_input_token": 3900,
                    "temperature": 0.1
                }
            }
        ],
        "selected": "Mistral 7B int4"
    },
    "sample_questions": [
        {
            "query": "How does NVIDIA ACE generate emotional responses?"
        },
        {
            "query": "What is Portal prelude RTX?"
        },
        {
            "query": "What is important about Half Life 2 RTX?"
        },
        {
            "query": "When is the launch date for Ratchet & Clank: Rift Apart on PC?"
        }
    ],
    "dataset": {
        "sources": [
            "directory",
            "nodataset"
        ],
        "selected": "directory",
        "path": "dataset",
        "isRelative": true
    },
    "strings": {
        "directory": "Folder Path",
        "nodataset": "AI model default"
    }
}
[03/25/2024-12:33:38] You try to use a model that was created with version 2.5.1, however, your version is 2.2.2. This might cause unexpected behavior or errors. In that case, try to update to the latest version.



╭─────────────────────────────── Traceback (most recent call last) ───────────────────────────────╮
│ C:\Users\mehmet\AppData\Local\NVIDIA\ChatWithRTX\RAG\trt-llm-rag-windows-main\app.py:109 in <module>
│
│   106 )
│   107
│   108 # create embeddings model object
│ ❱ 109 embed_model = HuggingFaceEmbeddings(model_name=embedded_model)
│   110 service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model,
│   111                                                context_window=model_config["max_input_to
│   112                                                chunk_overlap=200)
│
│ C:\Users\mehmet\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\site-packages\langchain\embeddings\huggingface.py:66 in __init__
│
│    63                 "Please install it with `pip install sentence-transformers`."
│    64             ) from exc
│    65
│ ❱  66         self.client = sentence_transformers.SentenceTransformer(
│    67             self.model_name, cache_folder=self.cache_folder, **self.model_kwargs
│    68         )
│    69
│
│ C:\Users\mehmet\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\site-packages\sentence_transformers\SentenceTransformer.py:95 in __init__
│
│    92                             use_auth_token=use_auth_token)
│    93
│    94             if os.path.exists(os.path.join(model_path, 'modules.json')):    #Load as Sen
│ ❱  95                 modules = self._load_sbert_model(model_path)
│    96             else:   #Load with AutoModel
│    97                 modules = self._load_auto_model(model_path)
│    98
│
│ C:\Users\mehmet\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\site-packages\sentence_transformers\SentenceTransformer.py:840 in _load_sbert_model
│
│   837         modules = OrderedDict()
│   838         for module_config in modules_config:
│   839             module_class = import_from_string(module_config['type'])
│ ❱ 840             module = module_class.load(os.path.join(model_path, module_config['path']))
│   841             modules[module_config['name']] = module
│   842
│   843         return modules
│
│ C:\Users\mehmet\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\site-packages\sentence_transformers\models\Pooling.py:120 in load
│
│   117         with open(os.path.join(input_path, 'config.json')) as fIn:
│   118             config = json.load(fIn)
│   119
│ ❱ 120         return Pooling(**config)
│   121
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: Pooling.__init__() got an unexpected keyword argument 'pooling_mode_weightedmean_tokens'
Press any key to continue . . .

I was about to post about this issue!
I will try installing an older driver and see if it works.

Getting the EXACT same error message.
I successfully installed it once for user A, and running it worked out of the box! Bravo.
Then I realized that it was a hassle trying to run it when logged in with user B (which I must be), so I uninstalled it.
Re-installing it works, but when I launch it now I get the same error as you do, and the same model version warning.

EDIT: I installed version 2.5.1 of the package, but trying to launch now gives me more wild errors that I see no way to get past… sigh.

What are they talking about with "version 2.5.1"?

You can see sentence-transformers in the error message in the command window. It's a Python package.
However, as I mentioned, upgrading it to 2.5.1 just gives me more intricate problems.
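
As far as I can tell, the TypeError happens because the embedding model's pooling config was written by sentence-transformers 2.5.1 and contains keys (such as pooling_mode_weightedmean_tokens) that the bundled 2.2.2 Pooling class does not accept. A minimal check you can run with the same Python that app.py uses (just a sketch; the versions are what the log above suggests):

import inspect

import sentence_transformers
from sentence_transformers.models import Pooling

# The version bundled with Chat with RTX (2.2.2 according to the warning above).
print(sentence_transformers.__version__)

# On 2.2.2 this prints False: Pooling.__init__ has no such parameter,
# which is exactly why Pooling(**config) raises the TypeError in the traceback.
print("pooling_mode_weightedmean_tokens" in inspect.signature(Pooling.__init__).parameters)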

I tried upgrading it too (to 2.5.1, even though the latest version is 2.6.0), and it just ignored the upgrade, still saying I need to install it.
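
If the upgrade seems to be ignored, pip may have installed into a different Python than the one Chat with RTX actually uses (the bundled env_nvd_rag environment). A quick sanity check, as a sketch: save the lines below (the file name check_st.py is just an example) and run them with that environment's own interpreter, e.g. C:\Users\<you>\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\python.exe check_st.py (the path is assumed from the log above).

import sys

import sentence_transformers

print(sys.executable)                     # which Python is actually running
print(sentence_transformers.__version__)  # the version this environment really has
print(sentence_transformers.__file__)     # where that copy of the package lives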

This fixed it for me: Rtx with Chat successfully installed but causes error when run - #4 by aakashr1996


This fixed it for me as well (a note on why it works is below):

  1. Copy the content of the Pooling.py from the post linked above.

  2. Paste it into \NVIDIA\ChatWithRTX\env_nvd_rag\Lib\site-packages\sentence_transformers\models\Pooling.py, overwriting the existing file.
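
Why this works (a simplified sketch, not the library's actual code): the replacement Pooling.py accepts the newer pooling_mode_* keys that the embedding model's config.json contains, so Pooling(**config) no longer blows up. The class names and values below are only illustrative.

# Illustrative stand-ins for the old and new Pooling classes (not the real ones).
class OldPooling:
    def __init__(self, word_embedding_dimension, pooling_mode_mean_tokens=True):
        self.dim = word_embedding_dimension

class NewPooling:
    def __init__(self, word_embedding_dimension, pooling_mode_mean_tokens=True,
                 pooling_mode_weightedmean_tokens=False, pooling_mode_lasttoken=False):
        self.dim = word_embedding_dimension

# A config shaped like what a newer sentence-transformers writes (values made up).
config = {
    "word_embedding_dimension": 1024,
    "pooling_mode_mean_tokens": True,
    "pooling_mode_weightedmean_tokens": False,
}

NewPooling(**config)  # loads fine
try:
    OldPooling(**config)
except TypeError as exc:
    print(exc)  # unexpected keyword argument 'pooling_mode_weightedmean_tokens'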

Here is the result. It still shows the same version warning, then generates new values (the embeddings), and the program starts.

Environment path found: C:\Users\mehmet\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag
App running with config
 {
    "models": {
        "supported": [
            {
                "name": "Mistral 7B int4",
                "installed": true,
                "metadata": {
                    "model_path": "model\\mistral\\mistral7b_int4_engine",
                    "engine": "llama_float16_tp1_rank0.engine",
                    "tokenizer_path": "model\\mistral\\mistral7b_hf",
                    "max_new_tokens": 1024,
                    "max_input_token": 7168,
                    "temperature": 0.1
                }
            },
            {
                "name": "Llama 2 13B int4",
                "installed": true,
                "metadata": {
                    "model_path": "model\\llama\\llama13_int4_engine",
                    "engine": "llama_float16_tp1_rank0.engine",
                    "tokenizer_path": "model\\llama\\llama13_hf",
                    "max_new_tokens": 1024,
                    "max_input_token": 3900,
                    "temperature": 0.1
                }
            }
        ],
        "selected": "Mistral 7B int4"
    },
    "sample_questions": [
        {
            "query": "How does NVIDIA ACE generate emotional responses?"
        },
        {
            "query": "What is Portal prelude RTX?"
        },
        {
            "query": "What is important about Half Life 2 RTX?"
        },
        {
            "query": "When is the launch date for Ratchet & Clank: Rift Apart on PC?"
        }
    ],
    "dataset": {
        "sources": [
            "directory",
            "nodataset"
        ],
        "selected": "directory",
        "path": "dataset",
        "isRelative": true
    },
    "strings": {
        "directory": "Folder Path",
        "nodataset": "AI model default"
    }
}
[03/25/2024-13:58:57] You try to use a model that was created with version 2.5.1, however, your version is 2.2.2. This might cause unexpected behavior or errors. In that case, try to update to the latest version.



Generating new values
Parsing nodes: 100%|████████████████████████████████████████████████████████████████| 30/30 [00:00<00:00, 185.30it/s]
Generating embeddings: 100%|██████████████████████████████████████████████████████| 156/156 [00:04<00:00, 37.50it/s]
The file at ./config/preferences.json does not exist.
Open http://127.0.0.1:44939?cookie=a615b0bc-6921-4d8a-80e1-8ff8663f4f34&__theme=dark in browser to start Chat with RTX
Running on local URL:  http://127.0.0.1:44939

To create a public link, set `share=True` in `launch()`.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.