I was using an ssd_inception_v2_coco_2018_01_28 model and it didn’t work for me, so I had to switch to MobileNet V2.
Did MobileNet work for you in C++ TensorRT?
If yes,
could you provide me the config file for UFF, and the TensorFlow and Object Detection API versions you used for training?
Thank you
I used Python; see the link for the converter if you think it would help you: object_detection/uff_convert at master · leandrovrabelo/object_detection · GitHub
Okay.
Did this work for you to convert a TensorFlow-trained MobileNet V2 to UFF and then to TensorRT?
Could you tell me which versions of TensorFlow and the Object Detection API you used for training?
I’m using TensorFlow 1.15.2 and TensorRT 6.0.1.10 on a Jetson Nano.
See attached the file I used to convert the model and create the engine; I ran it in a Jupyter Notebook:
conversor.txt (6.9 KB)
(don’t forget to change the file paths and the number of classes)
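For anyone who can’t open the attachment, the general shape of such a converter (on TensorRT 6.x, where the UFF parser and `build_cuda_engine` still exist) looks roughly like the sketch below. This is not the attached file — the paths, input/output node names (`Input`, `NMS`) and input shape are assumptions typical of SSD models exported with the Object Detection API, so adjust them to your own graph:

```python
# Hedged sketch: frozen TF graph -> UFF -> serialized TensorRT engine.
# Assumes TensorRT 6.x with the `uff` and `tensorrt` Python packages
# (as shipped on a Jetson); node names and shapes are placeholders.
import uff
import tensorrt as trt

# 1) Convert the frozen inference graph to UFF.
#    "NMS" is the usual output node after graphsurgeon preprocessing of
#    an SSD model; your graph may differ.
uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",
    output_nodes=["NMS"],
    output_filename="model.uff",
)

# 2) Parse the UFF file and build a CUDA engine.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28  # 256 MB, fits the Nano
    parser.register_input("Input", (3, 300, 300))  # CHW, SSD 300x300
    parser.register_output("NMS")
    parser.parse("model.uff", network)
    engine = builder.build_cuda_engine(network)

    # 3) Serialize the engine so it can be deserialized at inference time.
    with open("model.engine", "wb") as f:
        f.write(engine.serialize())
```

Note that on TensorRT 7+ the UFF path is deprecated, which is one reason the exact TF/TRT version pairing discussed in this thread matters.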
Did you work with the C++ sampleUffSSD code?
Did the TF → UFF → TensorRT (C++) pipeline work for you for custom-trained SSD Inception V2 models?
HI @1805153908
Did you work with the C++ sampleUffSSD code?
Did the TF → UFF → TensorRT (C++) pipeline work for you for custom-trained SSD Inception V2 models?
HI @yahya_qlue
Did you work with the C++ sampleUffSSD code?
Did the TF → UFF → TensorRT (C++) pipeline work for you for custom-trained SSD Inception V2 models?
This one worked for me as well. Thanks a lot for providing the details about it.
Just a heads-up for anyone using this trick: it works perfectly with TensorFlow 1.14 and 1.15. So if you have already replaced the files in the anchor_generators folder of the TensorFlow Object Detection model root with the same folder from previous versions (as suggested a few comments above), revert it back to the original version, since you would otherwise lose significant precision compared to training with the newer versions.
To sum up, you can use the Google Colab example provided in the latest version of the TensorFlow TF-1 API, then use this trick to train your network and get a decent result compared to TF 1.12.