I am trying to run the NanoOwl generative AI example, but I am hitting an out-of-memory error. I am running it on an AGX Orin 32 GB.

When I run this example:
python3 owl_predict.py --prompt="[an owl, a glove]" --threshold=0.1 --image_encoder_engine=…/data/owl_image_encoder_patch32.engine
I get this memory error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4973.43 GiB (GPU 0; 29.88 GiB total capacity; 647.78 MiB already allocated; 22.52 GiB free; 684.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
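The error message itself suggests trying `max_split_size_mb`. As a sketch (I have not confirmed this fixes it on the Orin), that option is passed to PyTorch's CUDA caching allocator through the `PYTORCH_CUDA_ALLOC_CONF` environment variable before launching the script; the `128` value is just an example, not a recommendation:

```shell
# Sketch: limit the size of cached allocator blocks to reduce fragmentation,
# as hinted by the OutOfMemoryError message. 128 MB is an example value.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

python3 owl_predict.py \
    --prompt="[an owl, a glove]" \
    --threshold=0.1 \
    --image_encoder_engine=…/data/owl_image_encoder_patch32.engine
```

That said, the attempted allocation of 4973.43 GiB is far beyond any real tensor on a 32 GB board, so fragmentation tuning may not be the root cause here.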