Didn't detect my object! SSD-MobileNetV2 and detectnet inference

Hello guys!
I was trying object detection.

Steps:

  1. Downloaded dusty's jetson-inference code
  2. Captured my input images and annotations
  3. Trained the model using SSD-MobileNetV2,
     and got a loss of 5.89, which looked like a good enough checkpoint for my purposes
  4. Converted the checkpoint to an ONNX file
  5. Finally ran detection on the object

This whole process works with the general COCO dataset! With my custom dataset every step also completes, and the TensorRT video streams, but it does not detect my custom object.

What is the reason? And please suggest a solution, guys!
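One frequent cause of a model that trains to a reasonable loss but then never detects anything is a mismatch between the class names inside the annotation XMLs and the labels.txt used for training and inference. A minimal sanity check, assuming the Pascal VOC layout that camera-capture and train_ssd use (the paths in the example are illustrative, not from this thread):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def check_voc_labels(annotations_dir, labels_file):
    """Return the class names that appear in the VOC annotation XMLs
    but are missing from labels.txt."""
    labels = {line.strip()
              for line in Path(labels_file).read_text().splitlines()
              if line.strip()}
    found = set()
    for xml_path in Path(annotations_dir).glob("*.xml"):
        for obj in ET.parse(xml_path).getroot().iter("object"):
            found.add(obj.findtext("name", "").strip())
    return found - labels

# Example with hypothetical paths: any name returned here was annotated
# but will silently never be learned or detected.
# check_voc_labels("data/my_dataset/Annotations", "data/my_dataset/labels.txt")
```

If this returns a non-empty set, the training data and the label file disagree, and those boxes are effectively thrown away.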

Hey @VK01! I also came across this problem, even though it has worked for me in the past. May I know if you were able to fix it?


Yes buddy!

I tried a new approach:

  1. Annotate only a single object per frame! That tends to give a good result, and the object actually gets detected.

  2. Also provide more data with labeling.

I got a good result, buddy!
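The two fixes above can be checked mechanically before retraining. A small sketch, assuming VOC-style annotation XMLs like camera-capture writes (the folder name is hypothetical): it counts labelled boxes per class and flags frames containing more than one object.

```python
import xml.etree.ElementTree as ET
from collections import Counter
from pathlib import Path

def annotation_stats(annotations_dir):
    """Summarise a VOC annotation folder: labelled boxes per class,
    plus the number of frames containing more than one object."""
    per_class = Counter()
    multi_object_frames = 0
    for xml_path in Path(annotations_dir).glob("*.xml"):
        names = [obj.findtext("name", "").strip()
                 for obj in ET.parse(xml_path).getroot().iter("object")]
        per_class.update(names)
        if len(names) > 1:
            multi_object_frames += 1
    return per_class, multi_object_frames

# Example (hypothetical path):
# counts, multi = annotation_stats("data/my_dataset/Annotations")
```

Classes with only a handful of boxes are the ones that need "more data with labeling".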

Are you still using jetson-inference? I am using labelImg to annotate ‘Tea bottle’ and ‘Soya bottle’, but after training and exporting to ONNX I encountered the same issue as you did previously: it detects the COCO dataset objects and just changes their names to ‘Tea bottle’ and ‘Soya bottle’. May I know what to do in order to fix this? I am training a custom dataset and only want to detect the Tea and Soya bottles, not the COCO classes.

Thank you!
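One plausible cause of this exact symptom: if detectnet is launched without the full set of custom-model arguments from the jetson-inference tutorial, it falls back to its default COCO-trained SSD-Mobilenet network, and a custom labels file can then just rename those COCO detections. A quick sketch that checks a command line for the flags the tutorial passes (flag names are from dusty's docs; the model paths below are hypothetical):

```python
# Flags the jetson-inference tutorial passes when running a custom
# re-trained SSD-Mobilenet ONNX model with detectnet.
REQUIRED_FLAGS = ("--model=", "--labels=", "--input-blob=input_0",
                  "--output-cvg=scores", "--output-bbox=boxes")

def missing_detectnet_flags(argv):
    """Return the custom-model flags absent from a detectnet command line.
    Without --model=..., detectnet loads its default COCO-trained network,
    so you can see COCO detections under your own label names."""
    return [flag for flag in REQUIRED_FLAGS
            if not any(arg.startswith(flag) for arg in argv)]

# A complete command line (hypothetical paths) should come back clean:
cmd = ["detectnet",
       "--model=models/bottles/ssd-mobilenet.onnx",
       "--labels=models/bottles/labels.txt",
       "--input-blob=input_0",
       "--output-cvg=scores",
       "--output-bbox=boxes",
       "/dev/video0"]
```

If any flag is missing, fix the command before suspecting the training itself.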


Ok! I am still using the Jetson Nano.

I was doing the annotation work directly on the Jetson board using the camera-capture tool with the /dev/video0 camera.

The ‘camera-capture’ tool is already provided in dusty's jetson-inference GitHub repo.

Try this video https://youtu.be/2XMkPW_sIGg buddy!

Check your data and model paths, and check your data and label formats.
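A minimal sketch of those path checks, assuming the file names the jetson-inference tutorial produces (ssd-mobilenet.onnx, labels.txt) and the VOC dataset layout that train_ssd expects; adjust the paths to your own setup:

```python
from pathlib import Path

def check_paths(model_dir, data_dir):
    """Collect obvious path problems before launching detectnet: the
    exported ONNX model and labels file must exist, and the dataset
    should follow the VOC layout that train_ssd expects."""
    model_dir, data_dir = Path(model_dir), Path(data_dir)
    problems = []
    if not (model_dir / "ssd-mobilenet.onnx").is_file():
        problems.append("missing ssd-mobilenet.onnx (forgot onnx_export.py?)")
    if not (model_dir / "labels.txt").is_file():
        problems.append("missing labels.txt next to the model")
    for sub in ("Annotations", "JPEGImages", "ImageSets/Main"):
        if not (data_dir / sub).is_dir():
            problems.append("dataset is missing the VOC folder " + sub)
    return problems

# Example (hypothetical paths):
# for p in check_paths("models/bottles", "data/bottles"):
#     print(p)
```

An empty list means the layout at least looks right; any entry points at a concrete thing to fix first.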

Thanks a lot! I’ll try these out and see if it fixes the problem.