Currently, I'm running on a single GPU (46 GB) and hit an out-of-memory (OOM) error when loading the TensorRT engine. I'm planning to find a machine with 2 GPUs, each with the same amount of memory. How can I implement this on 2 GPUs?
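To be concrete, the approach I have in mind is deserializing one copy of the engine per GPU, each under its own CUDA context. Below is a minimal sketch, assuming the standard TensorRT Python API and pycuda; `load_engine_on_gpu` and the engine path are my own placeholder names, and I'm not sure whether this is the right pattern, or whether a single engine can be split across both devices at all:

```python
import tensorrt as trt
import pycuda.driver as cuda

cuda.init()

def load_engine_on_gpu(engine_path: str, gpu_id: int):
    """Deserialize one copy of a serialized engine on the chosen GPU.

    TensorRT allocates the engine's device memory on whichever CUDA
    device is current at deserialization time, so a context for
    `gpu_id` is pushed first. Sketch only; error handling is minimal.
    """
    ctx = cuda.Device(gpu_id).make_context()  # makes gpu_id the current device
    try:
        logger = trt.Logger(trt.Logger.WARNING)
        runtime = trt.Runtime(logger)
        with open(engine_path, "rb") as f:
            engine = runtime.deserialize_cuda_engine(f.read())
        return engine, ctx
    finally:
        ctx.pop()  # detach from this thread; push again before inference

# One engine copy per GPU, e.g. one per worker process or thread:
engine0, ctx0 = load_engine_on_gpu("model.engine", 0)
engine1, ctx1 = load_engine_on_gpu("model.engine", 1)
```

This only gives data parallelism (a full engine copy on each GPU), so I'm unsure it helps if the engine itself is too large for one 46 GB device. Is there a supported way to spread a single engine's memory across two GPUs?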