Chat with RTX working, but with launch errors

NVIDIA chat support told me to post my problem here.

I downloaded the installer to my E drive and then set it up on the C drive as normal. The launch.bat went through its usual process and the app loaded in the browser OK. The trouble is that it's set up without Llama 2 13B. I can also see there are a lot of other problems and errors in the launch.bat output. I asked Chat with RTX to explain the errors to me, and the following is what it said (I have no coding skills yet):

The Chat with RTX launch.bat is having some issues; a few different things are causing problems. One issue is that the model "WhereIsAI_UAE-Large-V1" is not found on the computer, which means the program is trying to use a model that is not installed or not available.

Another issue is a problem with the connection between the computer and the server that the program is trying to reach, which is causing the program to stop working suddenly.

There is also a problem with the model being used: the program is trying to use a model that is not available or not working properly, which is causing it to stop suddenly. [Strangely enough, it's working fine with Mistral 7B as the model.]

It's important to fix these issues so that the Chat with RTX launch.bat can work properly.

Here is a text copy of the launch.bat console output:
Environment path found: C:\Users\Larry\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag
App running with config
{
  "models": {
    "supported": [
      {
        "name": "Mistral 7B int4",
        "installed": true,
        "metadata": {
          "model_path": "model\mistral\mistral7b_int4_engine",
          "engine": "llama_float16_tp1_rank0.engine",
          "tokenizer_path": "model\mistral\mistral7b_hf",
          "max_new_tokens": 1024,
          "max_input_token": 7168,
          "temperature": 0.1
        }
      },
      {
        "name": "Llama 2 13B int4",
        "installed": false,
        "metadata": {
          "model_path": "model\llama\llama13_int4_engine",
          "engine": "llama_float16_tp1_rank0.engine",
          "tokenizer_path": "model\llama\llama13_hf",
          "max_new_tokens": 1024,
          "max_input_token": 3900,
          "temperature": 0.1
        }
      }
    ],
    "selected": "Mistral 7B int4"
  },
  "sample_questions": [
    {
      "query": "How does NVIDIA ACE generate emotional responses?"
    },
    {
      "query": "What is Portal prelude RTX?"
    },
    {
      "query": "What is important about Half Life 2 RTX?"
    },
    {
      "query": "When is the launch date for Ratchet & Clank: Rift Apart on PC?"
    }
  ],
  "dataset": {
    "sources": [
      "directory",
      "youtube",
      "nodataset"
    ],
    "selected": "directory",
    "path": "dataset",
    "isRelative": true
  },
  "strings": {
    "directory": "Folder Path",
    "youtube": "YouTube URL",
    "nodataset": "AI model default"
  }
}
[03/02/2024-09:25:18] No sentence-transformers model found with name C:\Users\Larry/.cache\torch\sentence_transformers\WhereIsAI_UAE-Large-V1. Creating a new one with MEAN pooling.
Using the persisted value form E:/Holding Files_vector_embedding
Open http://127.0.0.1:5211?cookie=7183ef4e-1b2c-409a-a9a2-d55662e7f1c1&__theme=dark in browser to start Chat with RTX
Running on local URL: http://127.0.0.1:5211

To create a public link, set share=True in launch().
[03/02/2024-09:25:27] Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "C:\Users\Larry\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\Larry\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\asyncio\proactor_events.py", line 165, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
[03/02/2024-09:25:27] Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "C:\Users\Larry\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Users\Larry\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\asyncio\proactor_events.py", line 165, in _call_connection_lost
    self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

Can anyone help me sort this out so I can use Llama 2 and whatever else I'm missing? I would be most grateful. Cheers for now, Laz77

I believe it's a similar error to the one in this post: https://forums.developer.nvidia.com/t/error-with-chat-with-rtx/282880

Some network firewalls seem to be blocking the Hugging Face repository downloads that the app needs in order to run.
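
If you want to confirm that's what's happening, here's a minimal check (just a sketch, plain Python on the affected machine) to see whether the embedding model ever made it into the cache. The folder name comes straight from the launch.bat log above:

```python
# Quick diagnostic (sketch): is the embedding model ChatRTX wants
# actually present in the sentence-transformers cache? The folder name
# is taken from the launch.bat log above.
from pathlib import Path

cache_dir = Path.home() / ".cache" / "torch" / "sentence_transformers" / "WhereIsAI_UAE-Large-V1"
print(cache_dir, "->", "present" if cache_dir.is_dir() else "missing")
```

If it prints "missing", the download is being blocked before it ever reaches disk.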

A temporary fix right now is to run a VPN before starting ChatRTX to bypass that block.
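
If the VPN does get you through, one way to make the fix stick is to warm the model cache once while the VPN is connected. This is only a sketch, assuming you run it inside the ChatRTX Python environment (env_nvd_rag), which has sentence-transformers available; the sentence-transformers version in question caches models under %USERPROFILE%\.cache\torch\sentence_transformers, the same path the log complains about:

```python
# One-time cache warm-up (sketch): run this once while the network block
# is bypassed. Assumes sentence-transformers is importable, as it is in
# the env_nvd_rag environment.
from sentence_transformers import SentenceTransformer

# Downloads WhereIsAI/UAE-Large-V1 into the default cache folder
# (~/.cache/torch/sentence_transformers/WhereIsAI_UAE-Large-V1), the
# exact path the launch.bat log reports as missing.
model = SentenceTransformer("WhereIsAI/UAE-Large-V1")
print(model.encode("cache warm-up").shape)  # sanity check; expect (1024,)
```

After that, launch.bat should find the cached copy, so you shouldn't need the VPN on every start.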

Thanks so much for your reply, tricky. I'll follow that up, give it a try, and see how it goes. I'll let you know.

I got the free VPN from Proton, but unfortunately it didn't work for me. Looks like I'll have to wait for a fix to come along.

Did you get this fixed?

What RTX card are you using, and how much memory does it have?

If you got it fixed, great. I had no issues with the install except a minor memory issue, which I solved.

Ensure Llama 2 is fully downloaded and that the paths in the config file are correct (a quick way to check them is sketched below). For the WhereIsAI_UAE-Large-V1 error, verify the model is installed or switch to a compatible one. Reinstalling might also resolve the connection resets. Good luck.
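
If it helps, here's a rough way to check those paths programmatically. The app folder and config file name below are assumptions on my part; point them at wherever your ChatRTX install actually keeps its config:

```python
# Sketch: verify that every model_path / tokenizer_path from the config
# shown in the log exists on disk. APP_DIR and CONFIG are assumptions;
# adjust them to your own install.
import json
from pathlib import Path

APP_DIR = Path(r"C:\Users\Larry\AppData\Local\NVIDIA\ChatWithRTX\RAG\trt-llm-rag-windows-main")  # assumed location
CONFIG = APP_DIR / "config" / "config.json"  # assumed file name

cfg = json.loads(CONFIG.read_text(encoding="utf-8"))
for model in cfg["models"]["supported"]:
    for key in ("model_path", "tokenizer_path"):
        path = APP_DIR / model["metadata"][key]
        print(f'{model["name"]:<18} {key:<15} {"OK" if path.exists() else "MISSING"}  {path}')
```

Anything flagged MISSING points at an incomplete Llama 2 download or a wrong path in the config.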

Hey mate, great advice. It really helped fix my problem; I was facing the same issue. Thanks for the help.