GPU not working with TensorFlow

The GPU does not do any computation when I work with TensorFlow in Python. tegrastats reports:
RAM 2677/3963MB (lfb 1x2MB) SWAP 524/6144MB (cached 30MB) IRAM 0/252kB(lfb 252kB) CPU [36%@921,100%@921,off,off] EMC_FREQ 4%@1600 GR3D_FREQ 0%@76 APE 25 PLL@36.5C CPU@40.5C PMIC@100C GPU@38C AO@45.5C thermal@39.5C POM_5V_IN 2516/2569 POM_5V_GPU 77/67 POM_5V_CPU 619/642
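In that tegrastats line, GR3D_FREQ is the GPU load, and it sits at 0% here. A minimal sketch of pulling that value out of tegrastats output, using the sample line above as input (the field format assumed is the usual `GR3D_FREQ <load>%@<freq>`):

```python
import re

def gr3d_load(tegrastats_line):
    """Extract the GPU load percentage from a tegrastats line.

    GR3D_FREQ is reported as "GR3D_FREQ <load>%@<freq>"; return the
    load as an int, or None if the field is absent.
    """
    m = re.search(r"GR3D_FREQ (\d+)%", tegrastats_line)
    return int(m.group(1)) if m else None

line = ("RAM 2677/3963MB (lfb 1x2MB) SWAP 524/6144MB (cached 30MB) "
        "CPU [36%@921,100%@921,off,off] EMC_FREQ 4%@1600 GR3D_FREQ 0%@76")
print(gr3d_load(line))  # 0 -> the GPU is idle
```

A persistent 0% here while a TensorFlow session is running is what "GPU does not do any computation" looks like in tegrastats.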

It does not work with either a regular TF model or a TensorRT model.
Examples like detectNet, written in C/C++, work fine.

The TensorFlow version is the official TensorFlow release for Jetson Nano.
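A first thing worth ruling out is whether TensorFlow can see the GPU at all. A sketch of that check, assuming the TF1-style API shipped in the official Jetson Nano wheel (`has_gpu` is a hypothetical helper, not a TensorFlow function):

```python
def has_gpu(device_types):
    """Return True if any listed device type is a GPU."""
    return any(t == "GPU" for t in device_types)

def tf_device_types():
    """List the device types TensorFlow has registered.

    TF1-style call; on a correctly installed Jetson wheel this
    should include both "CPU" and "GPU".
    """
    from tensorflow.python.client import device_lib
    return [d.device_type for d in device_lib.list_local_devices()]

# On the Nano this should print True; False would mean the wheel
# was installed without GPU support, which is a different problem
# than an idle-but-visible GPU:
# print(has_gpu(tf_device_types()))
```

If the GPU is listed but stays at 0% load, the problem is placement or stalling rather than installation.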


What kind of TensorFlow use case are you executing?
A possible issue is that the application is waiting for I/O.

To check this, could you profile your application with nvprof and share the data with us?

sudo /usr/local/cuda/bin/nvprof -o out.nvprof python3 [APP]


Hello, thank you for your answer.

I use the TensorFlow Object Detection API with a custom TensorRT model, but I have problems with the GPU even if I only load the TF frozen graph.
I tried to run this example from the NVIDIA GitHub:

and I had the same problem.

I ran your command, but I only got the log in the terminal; no other file was created. The log is available here:


We have checked the Object Detection API before.
The implementation isn't optimal because some conditional layers inside it are still the CPU version.
This puts the GPU into a waiting state and degrades performance.
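One way to confirm which ops fall back to the CPU is TF1's `log_device_placement` option (enabled via `tf.Session(config=tf.ConfigProto(log_device_placement=True))`), which prints one placement line per op. A sketch of counting CPU vs. GPU placements from that log; the line format assumed here is TF1's `name: (OpType): /job:.../device:CPU:0` style:

```python
def placement_counts(log_lines):
    """Count op placements per device type from TF1
    log_device_placement output, where each line ends with a device
    string such as /job:localhost/replica:0/task:0/device:CPU:0."""
    counts = {"CPU": 0, "GPU": 0}
    for line in log_lines:
        for dev in counts:
            if "/device:%s:" % dev in line:
                counts[dev] += 1
                break
    return counts

# Example placement lines as TF1 would print them:
log = [
    "MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0",
    "Where: (Where): /job:localhost/replica:0/task:0/device:CPU:0",
]
print(placement_counts(log))  # {'CPU': 1, 'GPU': 1}
```

A large CPU count relative to GPU would match the conditional-layer fallback described above.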

Here is the corresponding topic for your reference: