Chat With RTX Latest archive file corrupted

I still have the problem.

I can download part of the data, but sadly cannot download the full file.

Hope it’s not too inconvenient, but do you mind trying to download and extract again? As of 02/28 I’m having the same issues: the download is ~1.2GB and the extracted contents are ~1.3GB, with 7-Zip reporting the already-posted “unexpected end of data” error. I want to see if there’s an issue on my end or NVIDIA’s.

Sorry, spoke way too soon: a third download attempt worked. Weird that I didn’t get any errors from Chrome on the first two unsuccessful downloads, and I don’t know how to figure out what went wrong there. Regardless, I now have it downloaded successfully and the total file sizes look accurate.

Sorry for my delayed reply. Good to hear that you were finally able to download the file. Because of its very large size, I used resumable download software to get it. I have no problem downloading it now, but I still have difficulty installing it.

Still corrupted for me. I’ve tried different browsers and an independent downloader, and still no luck: “Unconfirmed start of archive”.

Can you confirm the file size? When I check the file properties of the archive, I get 37,680,476,142 bytes (size of the archive, not size on disk, since size on disk varies).

I also used PowerShell’s:

Get-FileHash C:\pathToYourArchive | Format-List

Algorithm : SHA256
Hash : 80B7C0C255706F3C863B8120F4908FC9C2F41861F73F3BF6DC056119520E1505

For those getting an error during file extraction: the download is probably corrupted and you need to try again. Try wired ethernet and/or leaving it running overnight if possible. There might also be issues on NVIDIA’s server(s) hosting the file; I don’t know if NVIDIA would let us self-host the archive for other users (probably not). Try skipping the resumable downloader and just letting it run overnight in the browser. A resumable downloader really shouldn’t be the problem at all, but I don’t know why these downloading/extracting issues are so widespread.
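Before kicking off another multi-hour extraction, it’s cheap to rule out a silently corrupted download with a size-plus-hash check. A minimal sketch in plain Python, using the byte count and SHA-256 posted earlier in this thread (the file path is a placeholder you’d fill in yourself):

```python
import hashlib
import os

# Values posted earlier in this thread for the ChatWithRTX archive
EXPECTED_SIZE = 37_680_476_142  # bytes
EXPECTED_SHA256 = "80b7c0c255706f3c863b8120f4908fc9c2f41861f73f3bf6dc056119520e1505"

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash the file in 1 MiB chunks so a ~35 GB archive never sits in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path):
    # Check the cheap thing first: a truncated download fails on size alone.
    size = os.path.getsize(path)
    if size != EXPECTED_SIZE:
        return False, f"size mismatch: got {size:,}, expected {EXPECTED_SIZE:,}"
    digest = sha256_of(path)
    if digest != EXPECTED_SHA256:
        return False, f"hash mismatch: got {digest}"
    return True, "size and SHA-256 both match"
```

The size check alone catches most truncated downloads instantly; the hash pass takes a few minutes on a file this big but is still far faster than a failed extraction.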

Also @li_shenggang, what installation errors are you getting? FWIW, the installation process took an entire hour for me (my hardware shouldn’t be the problem: i7-12700, 32GB DDR5, 2TB Solidigm (~3000MB/s), and a 3060 Ti). It looked like it was hanging, but I gave it the benefit of the doubt and it eventually installed. It looks like ChatRTX installs literally everything from scratch (Miniconda, Python dependencies, etc.), which is kind of a good thing for making it work on everyone’s PC.

For anyone who might find this useful, here is the directory structure after extraction:

UI Stuff
    llama_tp1_rank0.npz (26,388,067,120 bytes)
    mistral_tp1_rank0.npz (14,695,822,368 bytes)
Other Stuff

I also get the same size for this file:

-rwxr-xr-x. 1 root root 37680476142 Mar 3 16:14
-rwxr-xr-x. 1 root root 37680489005 Feb 17 20:10

As you can see, I actually downloaded both files, and was able to generate a hash value for the first one:


I moved the file to a Linux filesystem, but both values are the same as yours.

My problem is most likely a network issue: downloading files from GitHub, conda, or even pip can be slow or unstable here, so I tried a few times but didn’t succeed. I’m hoping for a new version with all the necessary packages bundled into the installer to allow offline installation; otherwise I may have to do some configuration for these sites to get a successful install. By the way, my system should be OK, as I have a 3090 with a nearly up-to-date driver. It would also help if the installer displayed more detail about what it is trying to do when it seems stuck.

Just in case anyone needs the hash value for the older installer:


Definitely sounds like the issue you mentioned with downloading from GitHub/pip. During the installation, the loading bar description said it was downloading something (can’t remember what) for nearly the entire process. Maybe try installing it overnight? Good luck; I really hope you get this working, since with a 3090 you’ll be able to run the 13B model easily. I’m still mostly using chatdocs and GPT4All, since I can use >13B GGUF models with those.

Thanks for the suggestions; I will try again this weekend after I point pip to a faster mirror. I do hope to get ChatWithRTX running to see if it helps in my research. I currently run LLMs mostly using Docker, but I haven’t tried RAG yet.
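For anyone else in the same boat, pointing pip at a mirror is just a small config file. A sketch (the Tsinghua mirror URL below is one commonly used example; substitute whichever mirror is fastest for your region):

```ini
# ~/.pip/pip.conf on Linux, or %APPDATA%\pip\pip.ini on Windows
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
```

The same setting can be applied without editing files via `pip config set global.index-url <mirror-url>`. Whether the ChatRTX installer’s bundled pip actually honors the user-level config is not something this thread confirms, so treat this as a thing to try.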

If you haven’t tried RAG at all, you might want to check out GitHub - nomic-ai/gpt4all: gpt4all: run open-source LLMs anywhere. They have a desktop application in beta as well that usually works.

Corrupt for me as well, after multiple download methods. It’s not my connection, I don’t think; I have 1GB down.

@TomNVIDIA perhaps a dumb question, but can you ask your back end people why they thought it was a good idea to put out a 35GB file with no hash listed on the website?
I work for a partner of yours, and we publish hashes even for files that are single-digit MB in size.
If site updates are an issue, perhaps just post a pointer here to a locked thread that someone updates with the new hash info as new versions roll out:

DateOfVersion1 = HASH 1
DateOfVersion2 = HASH 2
DateOfVersion3 = HASH 3

Hi @Casper042,

That is not a dumb question, we appreciate the feedback.
Our engineers will work on this ASAP. I will post back here in the thread when I have an update.


It still keeps breaking for me.

@TomNVIDIA Still not working


Can anyone verify me?
sha256: e48956999c8896d9e4308278fa2e8453cdeef9c5716cd5ca9f325f15734dfe00

If you get the same, maybe just give the post a like to reduce clutter.