The training started well and anchor points were generated. However, the one problem we faced is that it does not accept JPEG images. At the step that generates the validation dataset out of the training dataset, it kept reporting “Total 0 samples in kitti training dataset” despite there being 800 images there. We tried everything, including setting JPEG in every spec file. Still the same issue. We converted our images to PNG, re-uploaded, and it worked. Now the only problem is that the PNGs converted from JPEG are roughly 9 times larger. Have you come across this issue before? Any suggestions?
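If it helps anyone debug the “0 samples” symptom: it usually means the loader's image-extension filter matched nothing. Here is a minimal sanity check you can run on a KITTI-layout folder before kicking off training (the function name and exact extension list are my own assumptions, not TAO's actual loader logic):

```python
from pathlib import Path

def count_kitti_pairs(image_dir, label_dir, exts=(".png", ".jpg", ".jpeg")):
    """Count image/label pairs the way a KITTI-style loader would:
    every image needs a .txt label file with the same stem."""
    images = [p for p in Path(image_dir).iterdir() if p.suffix.lower() in exts]
    label_stems = {p.stem for p in Path(label_dir).glob("*.txt")}
    return sum(1 for p in images if p.stem in label_stems)
```

If this prints 0 for your folder while the files are visibly there, the mismatch is almost certainly in the extensions (e.g. `.JPG` vs the extension the spec file expects) or in image/label stem names.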
Hi Morganh, I gave it a million tries; somehow it just won’t accept any other format.
Is there any way to use a COCO dataset instead of KITTI to train YOLO? If yes, how can I go about it? Is there an example Jupyter notebook available for that?
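In case no direct COCO loader is available, one fallback is converting the COCO annotations to KITTI labels yourself; the bounding-box math is simple. A hedged sketch, assuming a standard COCO detection JSON (the function name is mine, and the zero-padded KITTI 3D fields are a simplification):

```python
import json
from collections import defaultdict

def coco_to_kitti_labels(coco_json_path):
    """Read a COCO annotation file and return KITTI-style label lines,
    grouped by image id (one list of lines per image)."""
    with open(coco_json_path) as f:
        coco = json.load(f)
    cat_names = {c["id"]: c["name"] for c in coco["categories"]}
    labels = defaultdict(list)
    for ann in coco["annotations"]:
        # COCO bbox is [top-left x, top-left y, width, height] in pixels;
        # KITTI wants absolute xmin ymin xmax ymax.
        x, y, w, h = ann["bbox"]
        name = cat_names[ann["category_id"]]
        labels[ann["image_id"]].append(
            f"{name} 0.00 0 0.00 {x:.2f} {y:.2f} {x + w:.2f} {y + h:.2f} "
            "0.00 0.00 0.00 0.00 0.00 0.00 0.00"
        )
    return dict(labels)
```

Each image's lines would then be written to one `.txt` file named after the image stem.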
I could not solve my PNG/JPEG issue directly, but I got the results we wanted by converting to PNG without increasing the file size. I was then able to export the model with 93% mAP, which is a great start (I believe) given the small subset of my data. When I export the model, I get everything I need except for the prototxt. That is probably because prototxt files apply only to Caffe models. My question is, will I be able to use the following example to run inference on a Jetson Xavier? deepstream_python_apps/apps/deepstream-test2 at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub (it asks for a prototxt)
I found some GitHub examples for TAO Toolkit + DeepStream; however, the code is in C++, and I am looking for something that is in Python and also includes an example with up to 3 SGIE models.
Any suggestions?
My other question is, the DeepStream examples I mentioned above include only Caffe models. Does this mean that DeepStream natively supports only Caffe models?
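For context on that question: as far as I can tell, DeepStream's nvinfer element is not limited to Caffe; the same config file format can point at Caffe (`model-file`/`proto-file`), UFF, ONNX, or TAO-encoded (`tlt-encoded-model`) models. A hedged sketch of what an SGIE config for a TAO-exported model might look like; every path, key, and id below is a placeholder:

```ini
# Sketch of an nvinfer secondary-GIE config for a TAO-exported model.
# All file paths, the model key, and the ids are placeholders.
[property]
gpu-id=0
# TAO/TLT-encoded model instead of a Caffe model-file/proto-file pair:
tlt-encoded-model=model.etlt
tlt-model-key=nvidia_tlt
labelfile-path=labels.txt
network-type=1
batch-size=16
# Run as a secondary classifier on the primary detector's objects:
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
```

So a missing prototxt should not block DeepStream inference, as long as the config references the exported model format instead of the Caffe pair.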
Also, if anyone else here is looking to convert a YOLO dataset to KITTI format, please use Voxel51. It works like a charm. I used another solution from GitHub and wasted 3 precious days only to find out that the labels were converted wrong.
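For reference, the coordinate transform is exactly where the broken converter went wrong for me. A minimal sketch of the YOLO-to-KITTI math for a single label line (the function name and the zero-padded KITTI 3D fields are my own simplification; truncation and occlusion are left at 0):

```python
def yolo_to_kitti(line, img_w, img_h, class_names):
    """Convert one YOLO label line to a KITTI detection label line.

    YOLO:  <class_id> <cx> <cy> <w> <h>, all normalized to [0, 1].
    KITTI: <name> <truncated> <occluded> <alpha> <xmin> <ymin> <xmax> <ymax> ...
    """
    cls_id, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    # Un-normalize and convert center+size to corner coordinates.
    xmin = (cx - w / 2) * img_w
    ymin = (cy - h / 2) * img_h
    xmax = (cx + w / 2) * img_w
    ymax = (cy + h / 2) * img_h
    name = class_names[int(cls_id)]
    # Remaining KITTI fields (3D dimensions, location, rotation) padded with zeros.
    return (f"{name} 0.00 0 0.00 "
            f"{xmin:.2f} {ymin:.2f} {xmax:.2f} {ymax:.2f} "
            "0.00 0.00 0.00 0.00 0.00 0.00 0.00")
```

Whatever tool you use, it is worth spot-checking a few converted boxes against the images, since a center/corner mix-up or a missed un-normalization silently produces wrong labels.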