I own an RTX 3070, but the whole rig pulls around 500 W at the wall under full load, and even at idle it draws some 60-80 W. I also own a GTX 1650 Ti gaming laptop that barely consumes any power and would be great for running Chat with RTX. TensorFlow runs fine on it, and I've trained image recognition models on that GTX laptop, which has a good amount of RAM. So why can't Chat with RTX run on a Turing-based GTX GPU as long as it's fed enough RAM/VRAM? What's stopping it? Other than wanting to sell more RTX 30 and 40 series cards and new GPUs, is there an actual technical limitation that prevents it from running on GTX GPUs?
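For reference, here is a small sketch of how one might check what a machine's GPU reports (name, VRAM, and CUDA compute capability) via `nvidia-smi`. Note that the `compute_cap` query field requires a reasonably recent driver, and that both the GTX 1650 Ti and the RTX 20 series are Turing parts reporting compute capability 7.5, so compute capability alone doesn't distinguish GTX from RTX; the parsing helper and dictionary keys below are just illustrative names, not part of any NVIDIA tool.

```python
# Sketch: query GPU name, total VRAM, and compute capability with nvidia-smi.
# Assumes an NVIDIA driver new enough to support the compute_cap query field.
import subprocess


def parse_gpu_csv(csv_text):
    """Parse 'name, memory.total [MiB], compute_cap' CSV rows from nvidia-smi."""
    gpus = []
    for line in csv_text.strip().splitlines():
        name, mem, cap = [field.strip() for field in line.split(",")]
        gpus.append({
            "name": name,
            "vram_mib": int(mem.split()[0]),  # e.g. "4096 MiB" -> 4096
            "compute_cap": cap,               # e.g. "7.5" for Turing
        })
    return gpus


def query_gpus():
    """Run nvidia-smi and return one dict per installed GPU."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,compute_cap",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)


if __name__ == "__main__":
    for gpu in query_gpus():
        print(gpu)
```

On my machines this would show the 1650 Ti and an RTX 20-series card reporting the same 7.5 capability, which is exactly why I'm asking whether the cutoff is really about hardware features (e.g. Tensor Cores, which the GTX 16 series lacks) rather than raw capability or memory.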