Hi @dusty_nv, I have run the train_ssd.py program successfully.
When I ran it in the Colab environment, I had 16GB of GPU memory available, but it only ever used 2GB.
I checked all the parameters in the train_ssd.py file but could not find one that sets GPU memory usage. I hope you can tell me which parameters to modify so the graphics card is fully utilized.
Here are my process and memory usage in Google Colab.
Thanks.
Hi @a2773545809, try increasing the batch size with the --batch-size argument to train_ssd.py. That will use more GPU memory. So will increasing the model resolution (e.g. --resolution=512), but that will also increase the computation/memory needed during inferencing runtime.
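For reference, a hypothetical invocation might look like the sketch below. Only --batch-size and --resolution are the flags mentioned above; the dataset path, dataset type, model directory, and epoch count are placeholders you would replace with your own values.

```
# Hypothetical example: raise the batch size and input resolution so training
# uses more of the available GPU memory.
# NOTE: the dataset/model paths, dataset type, and epoch count below are
# placeholders, not values from this thread.
python3 train_ssd.py \
    --dataset-type=voc \
    --data=data/my_dataset \
    --model-dir=models/my_model \
    --batch-size=32 \
    --resolution=512 \
    --epochs=30
```

If you hit an out-of-memory error, lower --batch-size again until training fits in the 16GB available on the Colab GPU.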
Hi @dusty_nv, thank you for your timely reply. This has been very helpful to me.
With these settings, training can fully utilize the graphics card and run faster.
Thank you again for your quick response to my simple question. q(≧▽≦q)