I'm currently working with an NVIDIA Jetson Xavier NX developer kit and want to test object detection on the device. I trained a custom SSDLite MobileNet V2 model on my own dataset on Windows. After freezing the graph and running it on the Xavier NX, it uses around 2.5 to 3 GB of memory.
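For context, this is roughly how I load and run the frozen graph on the device (a minimal sketch: the file name `frozen_inference_graph.pb` and the tensor names `image_tensor`, `detection_boxes`, etc. are the standard TF Object Detection API export defaults, so adjust them if your export differs):

```python
# Sketch: running a frozen TF Object Detection API graph with TF 1.x.
# "frozen_inference_graph.pb" and the tensor names are the standard
# Object Detection API export defaults -- placeholders, adjust as needed.
import importlib.util
import os

ran = False
if importlib.util.find_spec("tensorflow") is None or \
        not os.path.exists("frozen_inference_graph.pb"):
    print("skipping: needs TF 1.x and the frozen graph file")
else:
    import numpy as np
    import tensorflow as tf

    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    # allow_growth stops TF from grabbing all of the 8 GB shared memory
    # up front; it only reduces the initial reservation, not the model's
    # actual working set.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")
        with tf.Session(graph=graph, config=config) as sess:
            image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # dummy input
            boxes, scores = sess.run(
                ["detection_boxes:0", "detection_scores:0"],
                feed_dict={"image_tensor:0": image},
            )
    ran = True
```

Even with `allow_growth` enabled, the resident memory stays in the 2.5 to 3 GB range once inference starts.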
I also tried the TF-TRT conversion method described in your guide, Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation, but there is no noticeable improvement in either memory consumption or speed.
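For reference, my conversion step follows the TF 1.15 `TrtGraphConverter` API roughly like this (a sketch: the paths and output node names are placeholders from a standard Object Detection API export, and FP16 precision is one choice among several):

```python
# Sketch: TF-TRT conversion of a frozen graph on TF 1.15 (JetPack 4.5).
# Paths and node names are placeholders from the standard TF Object
# Detection API export; the script skips if prerequisites are missing.
import importlib.util
import os

converted = False
if importlib.util.find_spec("tensorflow") is None or \
        not os.path.exists("frozen_inference_graph.pb"):
    print("skipping: needs TF 1.15 with TensorRT and the frozen graph")
else:
    import tensorflow as tf
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    converter = trt.TrtGraphConverter(
        input_graph_def=graph_def,
        # Output nodes must be kept out of the fused TRT segments.
        nodes_blacklist=["detection_boxes", "detection_scores",
                         "detection_classes", "num_detections"],
        precision_mode="FP16",       # Xavier NX has good FP16 throughput
        max_batch_size=1,
        is_dynamic_op=True,          # build engines at runtime
        maximum_cached_engines=1,
    )
    trt_graph = converter.convert()

    # Save the converted graph for inference.
    with tf.gfile.GFile("trt_frozen_graph.pb", "wb") as f:
        f.write(trt_graph.SerializeToString())
    converted = True
```

The conversion completes and reports TRT segments being created, but loading and running the converted graph shows roughly the same memory footprint and frame rate as the plain frozen graph.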
I have also tried various other approaches, but most of them target the Jetson Nano or earlier JetPack versions. My JetPack version is 4.5.1, and some of the steps written for earlier JetPack versions are not supported on it.
I'm using TensorFlow 1.15 for JetPack 4.5.