Steps for TensorRT implementation of MobileNet-SSD on Jetson TX2

Hi, I’m new to the deep learning field, but I have worked with MobileNet-SSD and created a custom model to detect persons only (and it is working successfully). So my caffemodel and deploy.prototxt are ready. I have a Jetson TX2 board with TensorRT installed, but I don’t know how to run this MobileNet-SSD Caffe model (or any Caffe model, though I only need MobileNet-SSD) with TensorRT. I followed the method at https://github.com/chuanqi305/MobileNet-SSD for model creation. I have seen some posts on MobileNet-SSD here, but each of them starts in the middle or from a specific issue; I cannot find a complete step-by-step procedure for it.

Sorry for asking like this; I’m a fresher in this field and have been stuck on this for days.

Please help me…

Hello,

I’d recommend starting in the Jetson forum, specifically: https://devtalk.nvidia.com/default/topic/1029806/jetson-tx2/convert-tensorflow-model/

Thanks, thanks a lot. I have gone through it and reached https://github.com/NVIDIA/DIGITS. I installed DIGITS and am now planning to train my custom dataset with DetectNet (following the steps in that link for label creation, etc.). Am I following the right path? My main aim is to get the Caffe MobileNet-SSD model running in TensorRT.

Hello,

You are on the right path. Also, consult the Jetson forum if you have specific mobile implementation questions.

https://devtalk.nvidia.com/default/board/139/jetson-embedded-systems/
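For the Caffe-to-TensorRT step itself, the general flow with the TensorRT Python API's Caffe parser looks roughly like the sketch below. This is only a sketch, not a tested recipe for your exact model: the file paths and the `detection_out` output-blob name are assumptions taken from the usual MobileNet-SSD deploy.prototxt, and note that chuanqi305's network uses layers such as PriorBox and DetectionOutput that the stock Caffe parser may not handle without TensorRT plugin support.

```python
# Sketch: build a TensorRT engine from a Caffe deploy.prototxt + .caffemodel.
# Assumes the TensorRT Python bindings are installed (they ship with JetPack).
# NOTE: SSD layers like PriorBox/DetectionOutput may require TensorRT plugins.

def build_engine_from_caffe(deploy_path, model_path, output_blob="detection_out"):
    import tensorrt as trt  # imported inside so the file loads without TensorRT

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network()
    parser = trt.CaffeParser()

    # Parse the Caffe model into the TensorRT network definition.
    blob_to_tensor = parser.parse(deploy=deploy_path, model=model_path,
                                  network=network, dtype=trt.float32)

    # Tell TensorRT which blob is the network output.
    network.mark_output(blob_to_tensor.find(output_blob))

    builder.max_workspace_size = 1 << 28  # 256 MiB of build scratch space
    builder.max_batch_size = 1
    return builder.build_cuda_engine(network)
```

Since engine building is slow on the TX2, it is common to call `engine.serialize()` once and reload the serialized engine at startup instead of rebuilding every run.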

This is my code:


I hope this can help you.
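For the post-processing side, MobileNet-SSD's `detection_out` blob is a flat array of 7 floats per detection: `[image_id, label, score, xmin, ymin, xmax, ymax]`, with coordinates normalized to [0, 1]. A minimal parser might look like the sketch below (the helper name is mine, and the VOC label comment is an assumption; a custom person-only model will use its own label map):

```python
# Sketch: parse the flat detection_out blob produced by MobileNet-SSD.
# Each detection is 7 floats: [image_id, label, score, xmin, ymin, xmax, ymax],
# with box coordinates normalized to [0, 1].

def parse_detections(raw, conf_threshold=0.5, img_w=300, img_h=300):
    """Return detections above conf_threshold, with boxes scaled to pixels."""
    detections = []
    for i in range(0, len(raw) - len(raw) % 7, 7):
        image_id, label, score, xmin, ymin, xmax, ymax = raw[i:i + 7]
        if score < conf_threshold:
            continue  # drop low-confidence detections
        detections.append({
            "label": int(label),  # e.g. 15 = person in the VOC label map
            "score": float(score),
            "box": (int(xmin * img_w), int(ymin * img_h),
                    int(xmax * img_w), int(ymax * img_h)),
        })
    return detections
```

For example, a raw buffer containing one person at score 0.9 and a second detection at score 0.3 would, with the default threshold, yield a single detection with its box scaled into image pixels.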