Measuring Performance of Custom Object Detection Model

I have trained a custom object detection model on a Jetson Nano by following the Hello AI World SSD-Mobilenet retraining tutorial. Now I want to measure my model's performance metrics on the training and testing sets. What I want to accomplish is that when I run inference on the train and test images, I write the detection results (class ID, confidence, bounding box coordinates, etc.) to a file for every single image. My problem is that when I run inference like this: --model=$NET/ssd-mobilenet.onnx --labels=$NET/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes "$IMAGES/myimg_*.jpg" $IMAGES/test/myimg_%i.jpg
which Python file in the jetson-inference folder is actually running? I made some changes in jetson-inference/python/examples/, but nothing happens; even when I introduce a typo, the program still works correctly. Clearly the file in jetson-inference/python/examples/ is not the one being run, and I'm extremely confused about which file is. Can you tell me which file it is so that I can edit it for my goal, or suggest another solution?

Hi @mglaaa, after you make a change to the script, re-run this:

cd jetson-inference/build
cmake ../
sudo make install

This is so that your changes get copied to the versions in the build directory and under /usr/local/bin.

If you launch python3 from the jetson-inference/python/examples/ directory, then you should see your changes take effect immediately, without needing to run the above (i.e. as long as your terminal's current working directory is jetson-inference/python/examples/).

You can also make a copy of the script under your own name, to avoid this confusion between the multiple files.
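Once you are editing the script that actually runs, writing the per-image results you described is straightforward. Below is a minimal sketch, assuming the jetson.inference detectNet API where each detection object exposes ClassID, Confidence, Left, Top, Right, and Bottom attributes; the small Detection dataclass here is only a stand-in for those objects so the helper can be shown self-contained (on the Jetson, the detections would come from net.Detect(img)).

```python
# Sketch: appending one CSV row per detection for each image.
# The Detection dataclass below mimics the attribute names used by
# jetson.inference detection results; it is NOT part of that library.
import csv
from dataclasses import dataclass

@dataclass
class Detection:
    ClassID: int
    Confidence: float
    Left: float
    Top: float
    Right: float
    Bottom: float

def write_detections(csv_path, image_name, detections):
    """Append one CSV row per detection for the given image."""
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        for d in detections:
            writer.writerow([image_name, d.ClassID,
                             f"{d.Confidence:.4f}",
                             d.Left, d.Top, d.Right, d.Bottom])

# Placeholder values purely to illustrate the output format:
write_detections("results.csv", "myimg_0001.jpg",
                 [Detection(1, 0.9172, 34.0, 50.0, 210.0, 300.0)])
```

You would call write_detections once per image inside the script's inference loop, after the call that returns the detections; the resulting CSV can then be compared against your ground-truth annotations to compute metrics.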

