Training an ssd-inception model with my own data and the COCO dataset

Hi everyone,

I’m working on a project in which I need to train a custom model based on ssd-inception-v2, using a dataset composed of images I labeled myself plus images from the COCO dataset. For now the model is trained to detect a single class. I’m posting here to get some help/advice about the training part. I’ve been trying to train my model for days, but its mAP never gets beyond 35%, and the loss keeps oscillating between 4.5 and 6 from roughly the first 100 epochs to the end of training. No matter what parameter I change in my model’s config file, the issue persists, and I’m running out of ideas. Any advice would be really appreciated.
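For what it’s worth, a loss that oscillates in that range can point to a learning rate that is too high for a single-class dataset (a label-map/TFRecord mismatch is another common culprit). As an illustration only, this is roughly what the optimizer section of the stock ssd_inception_v2 pipeline.config looks like; the lowered initial_learning_rate is just something to try, not a verified fix:

```
train_config {
  batch_size: 24
  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.001   # try lowering from the stock 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
}
```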

Thanks !


It’s recommended to first check whether the accuracy improves when training on the COCO data alone.
This will help you figure out whether the issue comes from your settings or from your dataset.


Hi, after 10000 steps of training on only the COCO part of my dataset, the result is the same: the loss is stuck around 4 to 6 and the mAP around 35%. Could it be because I converted the JSON annotation files to XML so that I could use the script? Should I go back to the TFRecord creation step and use the create_coco_tf_record script instead? Thanks for your help.
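Going straight from the COCO JSON to TFRecords with create_coco_tf_record is generally the safer route, since a JSON-to-XML detour can silently drop fields such as iscrowd or remap category ids. As a rough sketch of what that script does per image (the sample data below is illustrative, not from any real annotation file):

```python
from collections import defaultdict

# Minimal COCO-style annotation structure (hypothetical sample data).
coco = {
    "images": [{"id": 1, "file_name": "img1.jpg", "width": 640, "height": 480}],
    "annotations": [
        {"image_id": 1, "category_id": 18, "bbox": [10, 20, 100, 50], "iscrowd": 0},
        {"image_id": 1, "category_id": 18, "bbox": [200, 80, 60, 60], "iscrowd": 1},
    ],
    "categories": [{"id": 18, "name": "dog"}],
}

def group_annotations(coco_dict):
    """Group annotations by image id, the way create_coco_tf_record does
    before writing one tf.Example per image."""
    by_image = defaultdict(list)
    for ann in coco_dict["annotations"]:
        by_image[ann["image_id"]].append(ann)
    return dict(by_image)

grouped = group_annotations(coco)
# COCO bboxes are [x, y, width, height] in absolute pixels; the TFRecord
# writer normalizes them to [0, 1] using the image width/height.
for img in coco["images"]:
    for ann in grouped.get(img["id"], []):
        x, y, w, h = ann["bbox"]
        xmin, xmax = x / img["width"], (x + w) / img["width"]
        ymin, ymax = y / img["height"], (y + h) / img["height"]
        print(img["file_name"], ann["category_id"], round(xmin, 3), round(xmax, 3))
```

If your XML conversion dropped or altered any of these fields, the TFRecords built from it would quietly mislabel the training data.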


It looks like there is some issue in your training environment.
Please share the detailed steps you used to set up training so we can give further suggestions.


I finally decided to try my frozen graphs anyway, even though the mAP from training was pretty bad, and it turns out that number was quite misleading. I’m not as accurate as the ssd-inception-v2 model from dusty’s jetson-inference repo (95% mAP on my test files, whereas I’m around 85 to 90%), but the point is that I can still improve my dataset, whereas the jetson-inference one can’t be improved. I will close this topic since I’m satisfied with the results so far, and will come back if anything goes wrong. Thanks for your time.

Hello @AastaLLL ,
I have trained an ssd-inception-v2 model using my dataset.
The detection is good in TensorFlow.
Then I converted it to UFF and TensorRT.
The detections are good there too (TensorRT Python version).

But I want to use the C++ version of TensorRT.

In C++, no detections are found and no errors are reported.

Could you please let me know a possible solution for this?

Hi god_ra,

Please open a new topic for your issue. Thanks

Hi @kayccc,
I opened a new topic a long time back, but I have not received any update from NVIDIA.
It has been almost a month that I have been waiting.

Please look into these topics and solve the issues.
I need solutions for converting my custom-trained ssd-inception-v2 (2017_11_17) network to ONNX / UFF and then to TensorRT (C++ version).

Please solve this issue.

Hi god_ra,

This is the forum for Jetson-platform-related topics. If your issues are related to or happen on a Jetson platform, we will help provide suggestions to resolve them. Topics not posted in the Jetson forums are not on our radar, so we can’t give any comments on them.


I am using a Jetson Nano.
I am unable to convert my ssd-inception model for TensorRT inference in the C++ version.

I have posted multiple times here, but there has been no response from NVIDIA or the Jetson community.

I have been waiting for a solution for almost 2 months.

TensorFlow - UFF - TensorRT (C++) does not work.
TensorFlow - ONNX - TensorRT (C++) does not work.
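One common reason for “no detections, no errors” in the C++ path is parsing the NMS plugin’s output buffer with the wrong layout or too strict a confidence threshold, while the Python path happens to parse it correctly. In TensorRT’s sampleUffSSD, the detection output is a flat buffer of 7 floats per kept detection; the sketch below assumes that layout (image_id, label, confidence, xmin, ymin, xmax, ymax, with -1 marking unused slots), so verify it against your own plugin configuration before relying on it:

```python
# Hedged sketch: parse a flattened NMS-plugin-style detection buffer.
# Layout assumed: 7 floats per detection
# [image_id, label, confidence, xmin, ymin, xmax, ymax] (normalized coords).
# Verify this against your actual plugin/output configuration.

def parse_detections(buffer, keep_top_k, conf_threshold=0.3):
    detections = []
    for i in range(keep_top_k):
        det = buffer[i * 7:(i + 1) * 7]
        image_id, label, conf = det[0], int(det[1]), det[2]
        if image_id == -1:      # -1 marks an unused slot in the NMS output
            break
        if conf < conf_threshold:
            continue
        xmin, ymin, xmax, ymax = det[3:7]
        detections.append((label, conf, (xmin, ymin, xmax, ymax)))
    return detections

# Fake output buffer with two detections and one padding slot.
buf = [
    0, 1, 0.9, 0.1, 0.1, 0.5, 0.5,
    0, 1, 0.2, 0.6, 0.6, 0.8, 0.8,   # below threshold, skipped
    -1, 0, 0.0, 0.0, 0.0, 0.0, 0.0,  # padding
]
print(parse_detections(buf, keep_top_k=3))  # one detection survives
```

If your C++ code mirrors this logic with the exact same layout and a reasonable threshold, and still returns nothing, the problem is more likely in the UFF/ONNX conversion itself than in the inference loop.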

I need a solution for either of these two problems.

I need to port my model to UFF/ONNX and run inference with TensorRT (on the Jetson Nano platform) in the C++ version.

Please try to resolve this issue as early as possible.

Thank you

Hi god_ra,

I have moved those 3 topics into the Jetson Nano forum and will have an engineer take a look.

Thank you so much @kayccc

I hope to get a solution to my problem.