Hi, after I boot up my NVIDIA Jetson Xavier, I load very large TensorFlow and other models, which takes several minutes.
When deployed in my autonomous edge-computing application, the device has no peripherals, just an on/off switch.
When I flip the switch on, I want the Xavier to boot up and start running inference as quickly as possible.
Is there a way to skip my model-loading step and instead load a memory image that captures the state of the machine after the models were loaded … or some other solution?
I have 8 GB of memory and a 512 GB SD card.
Thanks in advance!