Use of GPU by OpenCV and TensorFlow (TX1 with JetPack 3.3)

Hi there. I just installed OpenCV (from the JetsonHacks tutorial for the TX1) and also TensorFlow (

How do I choose how much of the GPU memory OpenCV and TensorFlow will use?

The reason I ask: I am running a Python object detection script (TensorFlow Object Detection API). When it starts the TensorFlow session, the terminal shows the GPU's total memory and the free memory (which TF will use in this script). Every time I run the script, it reports that the TX1 has 3.98 GB of GPU memory in total and only about 1.05 GB free for TensorFlow to use. I feel that is not much for my purpose.

Is it possible to allocate more GPU memory to TensorFlow, and also to choose whether (and how much) OpenCV will use?

Thanks a lot!

Be aware that Jetson boards have an integrated GPU that shares the same memory as the CPU, as opposed to discrete GPUs, which have their own dedicated memory.

3.98 GB is the total memory on a TX1. Linux, the X server, and Ubuntu use a few hundred MB, so you should have more than 3.5 GB available.

If CPU processes (OpenCV, Python and its dependencies, …) have allocated more than 2.5 GB of memory, it would be normal for only about 1 GB to remain available.
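To see how much of the shared memory is already taken before the script starts, you can read `/proc/meminfo` directly (a quick sketch using only the standard library; `MemAvailable` is what is left for TensorFlow to grab):

```python
def meminfo_mb():
    """Parse /proc/meminfo (Linux only) and return values in MB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0]) // 1024  # kB -> MB
    return info

mem = meminfo_mb()
print("total: %d MB, available: %d MB"
      % (mem["MemTotal"], mem.get("MemAvailable", mem["MemFree"])))
```

On the Jetson itself, `sudo tegrastats` also gives a live readout of RAM and GPU load, which is handy for watching what happens when the session starts.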

If that is not your case, maybe there is a TensorFlow setting for the available memory, but sorry, I can't say; someone else would have to answer that.
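For what it's worth, TensorFlow 1.x (the version shipped for JetPack 3.3) does expose such settings through `tf.GPUOptions`. A minimal sketch; the 0.7 fraction is an arbitrary example value, not a recommendation:

```python
def limited_session(memory_fraction=0.7):
    """Return a TF 1.x session capped at a fraction of total GPU memory.

    memory_fraction=0.7 is an example value; tune it for your workload.
    """
    import tensorflow as tf  # TF 1.x API, as shipped for JetPack 3.3
    gpu_options = tf.GPUOptions(
        per_process_gpu_memory_fraction=memory_fraction,
        allow_growth=True,  # allocate memory lazily instead of all at once
    )
    return tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

# On a TX1 reporting 3.98 GB total, a 0.7 fraction targets roughly:
TOTAL_MB = 3980
print(round(TOTAL_MB * 0.7), "MB")  # about 2786 MB
```

Note that on an integrated-memory board this only caps what TF asks for; it cannot create free memory that other processes are already holding.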

Thank you for your answer.

I will have to find a way to free more memory before starting TensorFlow. If anyone knows how to do this, I would really appreciate hearing more about it.
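One common way to free a few hundred MB on a headless run is to stop the desktop while the script executes. This assumes the stock JetPack 3.3 / Ubuntu 16.04 image, where the display manager is lightdm; yours may differ:

```shell
# See whether a display manager is running (stock JetPack 3.3 uses lightdm):
systemctl status lightdm --no-pager || true
# To reclaim the memory used by X/Unity while the script runs:
#   sudo systemctl stop lightdm
#   ...run the TensorFlow script over SSH or a serial console...
#   sudo systemctl start lightdm   # restore the desktop afterwards
```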