Model not working on nano

Hi, I am new to using Jetson devices and am seeking some help. I have a program that loads an EfficientNetB3 model and makes a prediction. When I run it on Windows it always gets the prediction right (and uses under 400 MB), but when I run it on the Jetson it always gets it wrong, and I get a bunch of warnings saying `Allocator (GPU_0_bfc) ran out of memory trying to allocate 93.35MiB with freed_by_count=0`. Is it getting the prediction wrong because of a memory issue, even though it is a small program? If so, is there a solution? Do I need to use TensorRT? Thanks.

Hi,

Do you use TensorFlow for inference?
If yes, it’s recommended to convert the model to a TensorRT engine.
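As a quick mitigation for the allocator warnings in the meantime, you can tell TensorFlow to allocate GPU memory incrementally instead of grabbing it all at startup. A minimal sketch, assuming TensorFlow 2.x on JetPack:

```python
import tensorflow as tf

# Enable memory growth so the BFC allocator claims GPU memory as needed
# rather than reserving the whole pool up front. On the Nano, CPU and GPU
# share physical memory, so this often reduces out-of-memory warnings.
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

This must run before any tensors or models are created; set it at the top of your script.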

Below is a tutorial for EfficientNet for your reference:
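The conversion flow looks roughly like the following — a sketch using TF-TRT (the TensorRT integration bundled with TensorFlow 2.x); the directory paths and the helper name `convert_to_tensorrt` are placeholders:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def convert_to_tensorrt(saved_model_dir, output_dir):
    # Convert a TensorFlow SavedModel into a TF-TRT optimized SavedModel.
    # FP16 precision is a common choice on the Nano, which lacks INT8 support.
    params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=saved_model_dir,
        conversion_params=params)
    converter.convert()
    converter.save(output_dir)

# Example usage (paths are placeholders):
# convert_to_tensorrt('efficientnetb3_saved_model', 'efficientnetb3_trt')
```

The conversion itself must run on the Jetson (or a machine with the same TensorRT version), since the generated engines are device-specific.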

Thanks.