TAO Toolkit API with RTX 3090

Please provide the following information when requesting support.

• Hardware (RTX3090)
• Network Type (Detectnet_v2)
• TLT Version (3.22.05)
• How to reproduce the issue?

I am using TAO 3.22.05.
I have four RTX 3090 GPUs in one PC.
In the Kubernetes-based TAO Toolkit API, can I configure the GPU Operator with regular RTX 3090 graphics cards instead of a dedicated GPU?

Do you mean there are 4 GPUs in your PC, and you want to configure the GPU Operator to use only one GPU?

Thank you for your answer.
I was wondering whether the RTX 3090 is supported.
It is possible for each pod to access GPU resources, right?

Also, is training possible with only the number of GPUs specified in the TAO Toolkit API?
And can multiple GPUs be used on one node (e.g., 3090s)?

Yes, it is possible.
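
One way to sanity-check that a pod can actually see the GPUs is to list them from inside a running pod with nvidia-smi. A minimal sketch, assuming kubectl access to the cluster; the pod name and namespace below are placeholders, not from this thread:

```python
import subprocess

# Placeholder pod name/namespace; replace with an actual TAO API worker pod.
POD = "tao-toolkit-api-workflow-pod"
NAMESPACE = "default"

# List the GPUs visible inside the pod (requires the GPU Operator /
# NVIDIA container toolkit to be working on that node).
result = subprocess.run(
    ["kubectl", "exec", "-n", NAMESPACE, POD, "--", "nvidia-smi", "-L"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # e.g. "GPU 0: NVIDIA GeForce RTX 3090 (UUID: ...)"
```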

Yes, it is possible. See Deployment - NVIDIA Docs

  • numGpu is the number of GPUs assigned to each job. Note that multi-node training is not yet supported, so one is limited to the number of GPUs within a single cluster node for now.
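
For reference, a minimal sketch of how numGpu might be set when installing or upgrading the chart, assuming numGpu is exposed as a chart value as described on the linked Deployment page; the release and chart names here are placeholders, and the remaining required values should be taken from that guide:

```python
import subprocess

# Placeholder release/chart names; consult the linked Deployment guide
# for the actual chart reference and the other required values.
RELEASE = "tao-toolkit-api"
CHART = "nvidia/tao-toolkit-api"

# Assign 4 GPUs to each job on a 4x RTX 3090 node. Since multi-node
# training is not supported, numGpu cannot exceed the GPUs on one node.
subprocess.run(
    ["helm", "upgrade", "--install", RELEASE, CHART, "--set", "numGpu=4"],
    check=True,
)
```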

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.