Object detection on Jetson Xavier NX Developer Kit

Hi there,

I'm currently working on an NVIDIA Jetson Xavier NX Developer Kit and want to test object detection on the device. I trained a custom SSDLite MobileNet V2 model on my own dataset on Windows. After freezing the graph and running it on the Xavier NX, it uses around 2.5 to 3 GB of memory.
I also tried the TF-TRT conversion method described in your forum: Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation

However, there is still no noticeable improvement in memory consumption or speed.
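For reference, this is roughly what the TF-TRT conversion looks like with the TensorFlow 1.15 API. The graph file name and output node names below are assumptions based on standard TF Object Detection API exports, so adjust them to your model:

```python
# Hedged sketch: TF-TRT conversion of a frozen TF 1.x graph (TF 1.15 API).
# File name and output node names are assumptions -- adjust to your export.
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load the frozen graph produced on the training machine
with tf.io.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph = tf.compat.v1.GraphDef()
    frozen_graph.ParseFromString(f.read())

converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    # Typical TF Object Detection API output nodes (assumed)
    nodes_blacklist=['detection_boxes', 'detection_scores',
                     'detection_classes', 'num_detections'],
    precision_mode='FP16',             # FP16 is usually the big win on the NX
    max_workspace_size_bytes=1 << 28,  # keep the workspace small on 8 GB
    is_dynamic_op=True)                # SSD graphs generally need dynamic ops
trt_graph = converter.convert()

# Save the optimized graph for inference
with tf.io.gfile.GFile('trt_graph.pb', 'wb') as f:
    f.write(trt_graph.SerializeToString())
```

Note that even after TF-TRT conversion, the TensorFlow runtime itself is still loaded on the device, which is likely why memory usage barely drops.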

I also tried various other approaches, but most of them are implemented either on the Jetson Nano or on previous JetPack versions. My JetPack version is 4.5.1, and some steps from previous JetPack versions are not supported on it.

I'm using TensorFlow 1.15 for JetPack 4.5.

Hi,

Is TensorRT an option for you?
If yes, it should save memory since you don’t need to load TensorFlow at runtime.

Please find the detailed steps in the document below:
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-713/developer-guide/index.html#samplecode1
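As a quick sketch of that route: once the model is in ONNX (or UFF) form, the trtexec tool that ships with JetPack can build and benchmark an engine without writing any parser code. The file names below are assumptions:

```shell
# trtexec ships with JetPack under /usr/src/tensorrt/bin.
# Build an FP16 engine from an ONNX model and report timing.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx \
    --fp16 \
    --workspace=256 \
    --saveEngine=model.engine
```

The saved engine can then be deserialized with the TensorRT Python or C++ runtime, so TensorFlow never has to be loaded on the device.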

Thanks

Hi,
Should I convert my .pb file to UFF or to ONNX? I read somewhere that UFF will be deprecated. Please guide me through the steps to use TensorRT.

Hi,

Since you are using TensorFlow 1.x, you can try the UFF route to see if it works.
You can also convert the model to ONNX, but tf2onnx seems to have better support for TF 2.x.
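For what it's worth, both conversions are a single command. The input/output tensor names below are the usual TF Object Detection API ones and are assumptions, so please verify them against your graph first:

```shell
# UFF route (TF 1.x) -- convert-to-uff is installed with TensorRT's Python packages
convert-to-uff frozen_inference_graph.pb -o model.uff

# ONNX route -- tf2onnx also accepts TF 1.x frozen graphs
pip3 install tf2onnx
python3 -m tf2onnx.convert \
    --input frozen_inference_graph.pb \
    --inputs image_tensor:0 \
    --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0 \
    --output model.onnx --opset 11
```

One caveat: for SSD-style models the UFF route usually also needs a GraphSurgeon preprocessing step to map the NMS stage onto the corresponding TensorRT plugin (see the sampleUffSSD sample shipped with TensorRT).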

Thanks.