Well, my dataset of approximately 130GB crashes it. Yeah, it’s a demo; nevertheless. And I am getting a lot of “.pdf with error: PyCryptodome is required for AES algorithm. Skipping…” messages, even though those PDF files are not encrypted.
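For what it’s worth, that message usually means the PyCryptodome package is missing from the Python environment Chat with RTX runs in, so the PDF reader skips any file it thinks needs AES support, whether or not the file is really encrypted. A minimal sketch of what I’d try, assuming the bundled environment sits at the default env_nvd_rag path (that path is my assumption; adjust it to your install):

rem Install PyCryptodome into the Python environment Chat with RTX actually uses
rem (the env_nvd_rag path is an assumption based on a default install; verify it exists first)
"%LOCALAPPDATA%\NVIDIA\ChatWithRTX\env_nvd_rag\python.exe" -m pip install pycryptodome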
Thank you very much. I’ve now succeeded by following your two pieces of advice. The default location seems to be necessary, or it will fail at the “miniconda” step.
I have a sentence-transformers issue where Chat with RTX stops with ModuleNotFoundError: No module named ‘sentence_transformers’.
After running “pip install sentence-transformers” in the command prompt, I faced another error: “Getting requirements to build wheel did not run successfully.”
I fed both error logs to ChatGPT for ideas. It came back with two suggestions: 1. Install Microsoft Visual C++ Build Tools (Visual Studio Community 2022, which is yet another install). 2. Try installing sentence-transformers without building the wheel, using the --no-binary option. But the same build-wheel error still remains.
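One thing worth checking: running pip in a plain command prompt installs into your system Python 3.12, not into the Miniconda environment Chat with RTX actually uses, and Python 3.12 may also be why the wheel build failed (some dependencies had no prebuilt 3.12 wheels). A hedged sketch, assuming the bundled environment is at the default env_nvd_rag location (the path is my assumption; verify it on your machine):

rem Install sentence-transformers into Chat with RTX's bundled environment
rem instead of the system Python 3.12 (env path is an assumption; check it exists first)
"%LOCALAPPDATA%\NVIDIA\ChatWithRTX\env_nvd_rag\python.exe" -m pip install sentence-transformers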
Mistral 7B int4 and Llama 2 13B int4 seem to have installed successfully according to the logs.
Specs: Windows 11 Home, build 22621.3155
RTX 4080 12GB
4TB SSD
Python v3.12
Chat with RTX installed at the default path: C:\Users\abcdefg\AppData\Local\NVIDIA\ChatWithRTX
Some of these similar interruptions come from a SPACE in your username, for example the space in “Taylor Swift”. This tiny bug kicks in when the installer writes anything into your USERS folder. Find a way to eliminate the space, and the installation will go smoothly.
Had the same issues on 2 computers; updating the drivers and disabling the antivirus solved it. Hope this helps.
There are scripts for running conda in a virtual environment that should have had all the dependencies added during installation. It looks like that part broke somehow. The dependencies can be found in two requirements.txt files, though to add them manually you’ll have to figure out which virtual environment conda created for Chat with RTX.
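If you do go the manual route, a rough sketch would be to point the bundled environment’s pip at each requirements.txt the installer left behind. The env_nvd_rag name and the requirements.txt locations below are placeholders/assumptions you would need to verify on your own machine:

rem Reinstall the dependencies from the two requirements.txt files into the bundled env
rem (paths are illustrative placeholders; locate the real files under your ChatWithRTX folder)
"%LOCALAPPDATA%\NVIDIA\ChatWithRTX\env_nvd_rag\python.exe" -m pip install -r "C:\path\to\first\requirements.txt"
"%LOCALAPPDATA%\NVIDIA\ChatWithRTX\env_nvd_rag\python.exe" -m pip install -r "C:\path\to\second\requirements.txt"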
After stopping the antivirus, the installation completed successfully.
I am trying to install Chat with RTX, but when I open the NVIDIA Installer and move to the install portion it says “extracting…” and never progresses. All of the files were unzipped and I left the original file path. My driver is up to date and I am running Windows 11 with a GeForce 3070.
Any help would be appreciated.
After extracting, the first thing it does is install Miniconda. I’m not sure of the solution, but the first thing I’d check is whether you can install Miniconda manually. I believe you need to point the install to your profile’s appdata/local/Nvidia/miniconda folder, but double-check.
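For reference, the official Miniconda installer for Windows supports a silent install into a chosen folder, so a manual install along these lines should land it where the Chat with RTX installer expects (the target folder is taken from the post above; double-check it against your own install before relying on it):

rem Silent, per-user Miniconda install into the folder mentioned above
rem (run from wherever you downloaded the installer; the /D= argument must come last and be unquoted)
Miniconda3-latest-Windows-x86_64.exe /InstallationType=JustMe /RegisterPython=0 /S /D=%LOCALAPPDATA%\NVIDIA\miniconda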
NVIDIA Installer failed
Mistral 7B INT4 version 1.0 fails to install.
Chat with RTX installed successfully. Using the default path on the C: drive, 2TB of free space, SSD, no antivirus. Suggestions? Or give up?
Did llama install? I typically use that one because it’s 13B. Try launching it without Mistral.
No idea. How do I check? How do I launch? Was expecting this to be easy, and not require a CS degree to install /s
During installation it tells you: three things are listed (Mistral, Chat with RTX, and Llama). I’m assuming you got the message “not installed” next to Mistral. Did you get the message “installed” next to Llama?
Only 2 apps are listed during install. No mention of llama.
- Chat With RTX version 0.2 Installed
- Mistral 7B INT4 version 1.0 failed
That’s new to me. That probably means Llama didn’t get installed. Without logs (which Chat with RTX doesn’t have) it’s hard to say what the problem or fix is. Some people reported they needed to run the installer as an administrator. Beyond trying that, you may have to wait until NVIDIA releases a more stable installer.
Tried this: changed the RAG file setting from the default 15 to 7, and now the Llama2 installer shows up. I can see three options now, but Llama2 fails to install.
I couldn’t install Chat with RTX at the beginning, like others. But I tried another user account without a space in the username, and it began to install the dependencies.
Please check whether your username contains a space. It might be helpful.
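A quick way to see exactly what the installer will encounter is to print the account name and the profile folder from a command prompt; a space in either (as in the “Taylor Swift” example above) is what to look for:

rem Show the current account name and the user profile path the installer writes under
echo %USERNAME%
echo %USERPROFILE%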
Took me a few hours to figure out why it gets stuck at “downloading dependencies”. Make sure to turn on Windows PowerShell 2.0 in “Control Panel → Programs → Turn Windows features on or off”.
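If you prefer the command line, the same optional feature can be enabled from an elevated command prompt with DISM (feature name as listed by Windows; run dism /online /get-features to confirm it on your build):

rem Enable the Windows PowerShell 2.0 optional feature, equivalent to ticking it
rem in "Turn Windows features on or off"; a reboot may be requested afterwards
dism /online /enable-feature /featurename:MicrosoftWindowsPowerShellV2Root /all /norestart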
My experience is exactly the same as yours. Changing the RAG file setting from 15 to 7 did cause the Llama2 option to show up, but it failed just as yours did.
I had exactly the same experience as scott164, including changing the RAG file setting for Llama2 from 15 to 7. I had disabled the antivirus software (or so I thought) and removed and re-installed Chat with RTX several times, including using the “clean install” option.
But on one attempt I saw that the antivirus software flagged a possible problem (I told it to allow it), yet it still failed. Finally, I removed the antivirus software completely, did another removal of Chat with RTX, and followed it with a clean installation. This time it was successful, and I was able to use Chat with RTX with the Mistral model loaded. No sign of Llama2 though.
After a lot of thought I realized my graphics card had only 12 GB of memory, and when I looked at the llama 13b.nvi file I saw that the new installation had reset line 26 to:
<string name="MinSupportedVRAMSize" value="15"/>
I reset the value to 7 and ran the installation again without the clean install option. It seemed to load the Llama2 model quickly, and the program launched with both the Llama2 and Mistral models available.
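For anyone following along, the edit described above comes down to changing that one line in the llama 13b.nvi file so the VRAM floor matches a 12 GB card:

<string name="MinSupportedVRAMSize" value="7"/>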
Hope the above helps.