Hello all, I'm using a Jetson Nano. I captured and trained on my own images for a trial project, but when I try to detect the trained object, nothing is detected: the prediction bounding box does not show up in the camera feed, and no confidence score is displayed for my trained object.
Hi @srikoushikkamal, are you following the Hello AI World tutorial? If so, when you run your custom model with detectnet, you can use the --threshold argument to lower the detection threshold and see if the object is then detected. For example, you can set --threshold=0.25 (the default is 0.5).
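For reference, a lowered-threshold detectnet run might look like the sketch below. The model directory, labels file, and camera URI are placeholders (assumptions, not from this thread), so substitute your own paths; the command is built in a variable and echoed so you can review it before running.

```shell
# Sketch of a detectnet invocation with a lowered detection threshold.
# "models/myobject" and "csi://0" are hypothetical -- adjust to your setup.
CMD="detectnet \
  --model=models/myobject/ssd-mobilenet.onnx \
  --labels=models/myobject/labels.txt \
  --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
  --threshold=0.25 \
  csi://0"
echo "$CMD"    # on the Jetson, run it with: eval "$CMD"
```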
There is also the run_ssd_example.py script, which can run your PyTorch model before you export it to ONNX/TensorRT. That can help you determine whether the model has been trained enough.
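As a rough sketch, run_ssd_example.py takes the network type, checkpoint, labels file, and a test image as positional arguments. The checkpoint and image paths below are hypothetical placeholders; the command is echoed rather than executed so you can adapt it first.

```shell
# Sketch: test the PyTorch checkpoint on a single image before ONNX export.
# The .pth checkpoint, labels file, and test image paths are hypothetical.
CMD="python3 run_ssd_example.py mb1-ssd \
  models/myobject/checkpoint.pth \
  models/myobject/labels.txt \
  test-image.jpg"
echo "$CMD"    # run it on the Jetson with: eval "$CMD"
```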
Also, how many images are in your dataset?
Hey @dusty_nv, I'm following your tutorials; episode 3 shows how to train on and detect your own images, and that's what I did.
I have around 20 images of a particular object.
Thanks for the reply …!
When and where should the threshold be changed? I will also try running the run_ssd_example.py script.
You may need to collect more images and retrain your model. In my videos, I collected 100 images per object. You may need more than that if your object is small or difficult to distinguish from the background (or your camera is moving, etc.).
Also, after you collect more data, retrain your model, and re-export it to ONNX, delete the *.engine file in your model's folder. Otherwise TensorRT will load its cached copy of your old model instead of the new one.
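The retrain/re-export step above can be sketched like this. The model directory is a hypothetical placeholder, and the onnx_export.py command is echoed rather than executed so you can review the path first (on the Jetson it lives under the jetson-inference SSD training directory):

```shell
# Sketch: clear the cached TensorRT engine so it is rebuilt from the new ONNX.
# "models/myobject" is a hypothetical model directory -- use your own.
MODEL_DIR=models/myobject
rm -f "$MODEL_DIR"/*.engine    # force TensorRT to rebuild on next detectnet run
# Re-export the retrained PyTorch model to ONNX (run this on the Jetson):
echo "python3 onnx_export.py --model-dir=$MODEL_DIR"
```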
The threshold option is set when you run the inference program, detectnet/detectnet.py.
Thank you @dusty_nv ! Will get back after trying it out!
Yes @dusty_nv, your solution worked for us (we deleted the .engine file and retrained).