I have flashed JetPack 4.2 on my TX2 board.
I trained an object detection model using a Feature Pyramid Network (the model size is 242 MB).
Whenever I run the inference code on the TX2, the process gets killed when the session starts processing a frame (it is able to load the model, though).
I tried limiting/allocating the full GPU memory usage through TensorFlow (version 1.13), but it didn't work.
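For context, this is roughly how GPU memory is limited in TF 1.x via the session config (a sketch of the standard options, not necessarily the exact code I used; the 0.5 fraction is illustrative):

```python
import tensorflow as tf

config = tf.ConfigProto()
# Option A: let the session claim GPU memory incrementally instead of all at once.
config.gpu_options.allow_growth = True
# Option B: hard-cap the session at a fraction of total GPU memory.
config.gpu_options.per_process_gpu_memory_fraction = 0.5

with tf.Session(config=config) as sess:
    # run inference here, e.g. sess.run(detections, feed_dict={...})
    ...
```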
Does anyone have an idea how to solve this issue?
Thanks in advance.
If you run something like “htop” (“sudo apt-get install htop”) or tegrastats, is memory filling up prior to the kill? Also, is there a message about memory at the moment of the kill if you run “dmesg --follow”?
Yes, memory is filling up prior to the kill.
Facing the same issue.
Generally speaking, swap won’t help with GPU operations (physical RAM is required). Some other processes could be swapped out, though, and thus swap might free up a small amount of RAM for the GPU. Running fewer concurrent threads will also reduce the amount of RAM required.
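On the TensorFlow side, the thread counts mentioned above can be capped through the session config (a sketch for TF 1.x; setting both to 1 is the most memory-conservative choice, at a cost in throughput):

```python
import tensorflow as tf

config = tf.ConfigProto(
    intra_op_parallelism_threads=1,  # threads used within a single op
    inter_op_parallelism_threads=1,  # threads used to run independent ops
)
config.gpu_options.allow_growth = True  # also avoid grabbing all GPU memory up front

with tf.Session(config=config) as sess:
    # run inference here
    ...
```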