Not able to train ssd-mobilenet!

Hello! I am not able to train ssd-mobilenet on my Nano. First of all, the GitHub repo which I cloned has no ssd folder inside jetson-inference/python/training/detection/. Secondly, I am not finding any detection drop-down choice when I open the camera-capture tool to make my own object detection dataset. Please help me, as I am not able to train the model.

I want to add that I cloned the repo using this: git clone --recursive https://github.com/dusty-nv/jetson-inference

Hmm, if you cloned with --recursive, it should be found under jetson-inference/python/training/detection/ssd. If not, you could re-clone, or you could just clone https://github.com/dusty-nv/pytorch-ssd directly (I would put it in another directory outside of jetson-inference).

The pytorch-ssd repo is the same code that should show up under the python/training/detection/ssd submodule in jetson-inference, so it doesn’t actually matter which you use.
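
In case it helps, here is a minimal sketch of those options (the submodule path matches the one mentioned above; everything else is just an example):

    # option 1: pull the missing submodule into an existing clone
    cd jetson-inference
    git submodule update --init python/training/detection/ssd

    # option 2: re-clone the repo with all submodules
    git clone --recursive https://github.com/dusty-nv/jetson-inference

    # option 3: clone pytorch-ssd standalone, outside of jetson-inference
    git clone https://github.com/dusty-nv/pytorch-ssd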

Did you create a labels text file, with each class name on its own line? And then you need to load that label file in the camera-capture tool.
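
For reference, a labels.txt can be as simple as this (the class names below are just placeholders for your own objects):

    bottle
    cup
    phone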

Yes, I have created the labels.txt file, and in the camera-capture tool there are options showing up for the dataset and labels paths, but there is no drop-down option for choosing detection.
Secondly, regarding the GitHub repo, I had cloned with the --recursive flag but there was no ssd directory under the detection folder. I will clone the pytorch-ssd repo and will let you know if I run into any errors.

Can you please clarify one thing: when we point towards our dataset folder using the --data flag during training, let's say my flag is set as --data=/home/username/dataset/, then what should reside inside that dataset folder? Should it contain all the images along with the XML files and one labels.txt file?

Does the control widget look like below? Or does it not have the Dataset Type drop-down at the top?

If it’s missing the Dataset Type drop-down, then you are running an old version of the repo. So you may want to clone again from https://github.com/dusty-nv/jetson-inference

You should point --data=<path> to the same directory that you used for the camera-capture tool. The detection dataset is created in Pascal VOC format and will contain several subdirectories like this:

my_dataset/
    - Annotations
    - ImageSets
    - JPEGImages
    - labels.txt

You want to point --data to the top-level directory of your dataset (my_dataset/ in the example above).
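
As a rough sketch, a training run against that layout could look like the following (the flags follow the Hello AI World tutorial, but double-check them against your copy of train_ssd.py; the paths and epoch count are just examples):

    cd jetson-inference/python/training/detection/ssd
    python3 train_ssd.py --dataset-type=voc --data=/home/username/my_dataset \
                         --model-dir=models/my_model --epochs=30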

Yes, I was running the older version of the repo. I have now cloned the newer version, the problem of making my own detection dataset is solved, and I have successfully completed the training. However, after training and converting the trained model into ONNX, when I run the detectnet script, some lines scroll by in the terminal for a while, then the screen freezes, and afterwards the terminal becomes very slow. I am still not able to get the camera feed running on my Nano. The other issues are solved, but the results are not showing up because the terminal gets so slow. Is there any solution to this problem? By the way, I am running my Nano in 10W mode.
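
For context, the export and detectnet steps in question look roughly like this (the flag names follow the Hello AI World tutorial, and the paths, model name, and camera device are illustrative, so adjust them for your setup):

    # export the trained PyTorch checkpoint to ONNX
    python3 onnx_export.py --model-dir=models/my_model

    # run detectnet on a live camera with the exported model
    detectnet --model=models/my_model/ssd-mobilenet.onnx \
              --labels=models/my_model/labels.txt \
              --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
              /dev/video0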

Got your point. Thanks a lot.