I downloaded Nomic AI's GPT4All, but it runs really slowly because it is not utilizing the four 2080 Tis installed in my system; it is only using the CPU. So far, I have tried Nvidia Nsight, uninstalling/reinstalling the drivers, and NVIDIA PRIME. The only one of those that worked was PRIME: GPU 0 shows about 1% utilization when I run the chat executable with `prime-run 'executable'` in the terminal. However, the speed did not increase. PRIME also didn't work for me out of the box: PRIME profiles were not (and still are not) available in NVIDIA Settings on my system, so I had to write my own prime-run script (see the sketch below). I am trying to find a way to use the computing power of all four of my GPUs to run this chat executable faster.
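For reference, my hand-written prime-run is essentially the standard render-offload wrapper built from the `__NV_PRIME_RENDER_OFFLOAD`/`__GLX_VENDOR_LIBRARY_NAME` variables mentioned in the related threads. This is a minimal sketch; the provider name `NVIDIA-G0` is the common default and may differ on other setups:

```bash
#!/usr/bin/env bash
# Minimal PRIME render-offload wrapper (sketch): points the GLX and
# Vulkan stacks at the NVIDIA driver for the wrapped command only.
__NV_PRIME_RENDER_OFFLOAD=1 \
__NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0 \
__GLX_VENDOR_LIBRARY_NAME=nvidia \
__VK_LAYER_NV_optimus=NVIDIA_only \
exec "$@"
```

Saved as `prime-run` somewhere on `$PATH` and marked executable (`chmod +x`), it is invoked exactly as above: `prime-run 'executable'`. Note that these variables only control which GPU handles OpenGL/Vulkan rendering; they don't, as far as I can tell, force an application's compute work onto the GPU.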