Is the TensorFlow 2 C++ API multithreaded, and does it run on the GPU?
Can people experienced with the TensorFlow C++ API answer this?
I was reproducing a Python tutorial for TensorFlow 2 in C++ using the C++ API. I found that the C++ API was not multithreaded and was not using the GPU. After some trials, my system appears to be configured properly: CUDA code runs on the GPU, and the Python API runs multithreaded and on the GPUs.
As of my last knowledge update in January 2022, TensorFlow’s C++ API did not natively provide multithreading support. TensorFlow primarily focuses on Python as its main API, and the C++ API is often used for lower-level tasks or for integrating TensorFlow into existing C++ applications.
However, you can still use TensorFlow from a multithreaded C++ program; you need to manage the threading yourself. Create multiple threads in your C++ program and distribute the work of running TensorFlow operations across them. Be cautious about how you manage TensorFlow resources and sessions across threads, as improper management can lead to resource conflicts or synchronization issues.
Keep in mind that libraries like TensorFlow change over time, and the project may have added or improved multithreading support in subsequent versions. Check the official TensorFlow documentation or release notes for updates, as this information may be outdated.