Tesla plus generic video card while using TAO Toolkit

Hoping this is the correct forum area. I have a number of remote PCs that I use for training and later for production. The PCs run Ubuntu 20.04 and have a Tesla K80 plus an inexpensive GeForce GT 1030 that drives the display for those onsite. I have not used a setup like this before, where the 1030 should be ignored and only the Tesla used for training and inference. In the TAO Toolkit, for example, I can set the number of GPUs to use, so I would want to set this to "2" (the K80 is a dual-GPU board). But which two will it use? I have searched here and probably am not using the correct keywords. How do I ensure the Tesla's two GPUs are used and the 1030 ignored? Thanks to all who answer here!


This appears to be more related to the TAO Toolkit. We are moving this post to the TAO Toolkit forum so it gets better help.

Thank you.


Actually, this is not specific to the TAO Toolkit.
To run on one GPU or several GPUs, you can use:
--gpus 1 --gpu_index 0
--gpus 2 --gpu_index 0 1

The gpu_index values can be found by running $ nvidia-smi
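As a concrete sketch of the above — the device names, the indices, and the `tao detectnet_v2 train` invocation with its spec/results paths are assumptions for illustration, so confirm the indices against your own `nvidia-smi` output:

```shell
# List GPU indices and names. On this setup the Tesla K80 should appear as
# two devices, since it is a dual-GPU board (output below is illustrative):
nvidia-smi --query-gpu=index,name --format=csv,noheader
#   0, Tesla K80
#   1, Tesla K80
#   2, NVIDIA GeForce GT 1030

# Extract only the Tesla indices (here that would be "0 1"):
TESLA_IDX=$(nvidia-smi --query-gpu=index,name --format=csv,noheader \
            | awk -F', ' '/Tesla/ {printf "%s ", $1}')

# Pass those indices to the TAO command so the GT 1030 is ignored
# (task name and the spec/results paths are placeholders):
tao detectnet_v2 train --gpus 2 --gpu_index $TESLA_IDX \
    -e /workspace/spec.txt -r /workspace/results
```

The same `--gpus N --gpu_index ...` pattern applies to other TAO tasks; the key point is that `--gpu_index` takes the indices reported by `nvidia-smi`, so the display card is skipped simply by not listing its index.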


Thank you Morgan!
