Inference from a tflite model using GPU?

Hello! I have a GStreamer application written in Python that runs inference on image frames using a TFLite model by calling invoke() on the Interpreter class from tflite (my code is here, by the way: GitHub - espiriki/JetsonGstreamer: My scripts for jetson gstreamer)

I’m wondering: is that inference running on the CPU only? Does it take advantage of the GPU at all? Is there any way to check?

It takes around 150 ms per invoke() for a mobilenet_v1_1.0_224 model, which seems a bit slow to me.
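
For context, here is a minimal sketch of the kind of check I have in mind: try to attach a GPU delegate, fall back to the plain CPU kernels if that fails, and time a single invoke(). The delegate library name and the model path are only placeholders; as far as I can tell the stock tflite_runtime wheels don't ship a GPU delegate for Jetson, so this most likely lands in the CPU branch.

import time

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL_PATH = "mobilenet_v1_1.0_224.tflite"  # placeholder path

try:
    # The delegate library name is platform dependent; treat it as an assumption.
    delegate = load_delegate("libtensorflowlite_gpu_delegate.so")
    interpreter = Interpreter(model_path=MODEL_PATH,
                              experimental_delegates=[delegate])
    print("GPU delegate loaded")
except (ValueError, OSError):
    # No GPU delegate available: fall back to the default CPU kernels.
    interpreter = Interpreter(model_path=MODEL_PATH, num_threads=4)
    print("Running on CPU kernels")

interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# Dummy input of the right shape/dtype, just to time invoke().
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))

start = time.perf_counter()
interpreter.invoke()
print("invoke() took %.1f ms" % ((time.perf_counter() - start) * 1000))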

Thanks!

Hi,

This depends on whether your TFLite build supports a GPU delegate.
A simple way to check is to monitor the GPU utilization with tegrastats.

$ sudo tegrastats
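
If you only want to watch the GPU load while your pipeline is running, a small helper like the sketch below (just an illustration, assuming Python 3 and tegrastats on the default path) filters out the GR3D_FREQ field, which is the GPU utilization:

import re
import subprocess

# Stream tegrastats and print only the GR3D_FREQ (GPU load) field.
# Run this in one terminal while the inference pipeline runs in another.
proc = subprocess.Popen(["sudo", "tegrastats"],
                        stdout=subprocess.PIPE, text=True)
try:
    for line in proc.stdout:
        m = re.search(r"GR3D_FREQ (\d+)%", line)
        if m:
            print("GPU load: %s%%" % m.group(1))
except KeyboardInterrupt:
    proc.terminate()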

Thanks.


Thank you! I’m seeing something like this:

RAM 968/1980MB (lfb 16x4MB) SWAP 0/5086MB (cached 0MB) CPU [67%@1479,78%@1479,84%@1479,81%@1479] EMC_FREQ 0% GR3D_FREQ 0% PLL@33C CPU@37C PMIC@100C GPU@31.5C AO@39.5C thermal@35.5C

I guess it’s running entirely on the CPU then, since GR3D_FREQ (the GPU load) stays at 0%.

I’ll look into whether TFLite supports the Jetson GPU.
