But it did not work, maybe because of the training data.
The file examples/kitti/detectnet_solver.prototxt seems to require examples/kitti/kitti_train_images.lmdb and examples/kitti/kitti_train_labels.lmdb,
but they do not exist.
I do not know what they look like.
Could you please provide some sample data, or let me know how I can prepare it?
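For context, those two LMDBs are consumed by paired Data layers in the network prototxt that the solver points at. A hedged sketch of what such a pair of layers typically looks like (layer names and the batch size here are illustrative, not copied from the actual DetectNet prototxt):

```protobuf
layer {
  name: "train_data"
  type: "Data"
  top: "data"
  include { phase: TRAIN }
  data_param {
    source: "examples/kitti/kitti_train_images.lmdb"  # image LMDB
    backend: LMDB
    batch_size: 10  # illustrative value
  }
}
layer {
  name: "train_label"
  type: "Data"
  top: "label"
  include { phase: TRAIN }
  data_param {
    source: "examples/kitti/kitti_train_labels.lmdb"  # label LMDB
    backend: LMDB
    batch_size: 10  # must match the image layer
  }
}
```

The important point is that the image and label databases are read in lockstep, so they must contain the same number of records in the same order.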
DIGITS is just a nice GUI on top of NvCaffe. When it trains or classifies, it calls the NvCaffe command line tools.
KITTI is a data set, not a format. The format of the data here is LMDB.
Other formats can also be fed into NvCaffe; I recommend reading all the documentation and tutorials on Caffe, and then NvCaffe, and then playing around in DIGITS to learn how these systems work. It’s worth the few dollars to rent a GPU instance on the spot market for a few hours to get this understanding. A thousand times cheaper than any college tuition :-)
There are no instructions about DetectNet data preparation…
I am still reading, but even after going through all the documentation on Caffe and NvCaffe,
if I do not know how to create the 4 LMDBs in exactly the right way, I may not be able to train DetectNet?
Yeah, that took almost a week for me when I set that up.
Yes, because the labeling is totally different! DIGITS contains a few helper scripts that convert between text-mode files with rects/classes and the actual data loaded by the model.
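For reference, here is a minimal sketch of parsing one line of a KITTI-style text label file (the 15-field format per the KITTI devkit; treat this as an illustration, not the exact code the DIGITS helper scripts use):

```python
def parse_kitti_label(line):
    """Parse one line of a KITTI label file into a dict.

    KITTI's 15 whitespace-separated fields are:
    type truncated occluded alpha bbox(4) dimensions(3) location(3) rotation_y
    """
    f = line.split()
    return {
        "type": f[0],                  # object class, e.g. "Car"
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),
        # 2D bounding box in pixels: left, top, right, bottom
        "bbox": tuple(float(v) for v in f[4:8]),
        "dimensions": tuple(float(v) for v in f[8:11]),  # h, w, l in meters
        "location": tuple(float(v) for v in f[11:14]),   # x, y, z in meters
        "rotation_y": float(f[14]),
    }

# Example line in KITTI format (the values are illustrative):
label = parse_kitti_label(
    "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 "
    "1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
)
```

For 2D detection with DetectNet, the class name and the four bbox fields are the parts that matter; the 3D fields are ignored.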
I found this blog post to be helpful: Deep Learning for Object Detection with DIGITS | NVIDIA Technical Blog
It’s also helpful to study the log output from running the DIGITS training and classification passes, because it contains keywords and filenames you can then search for.
After reading that, poke through the actual DIGITS scripts/implementation and you’ll hopefully find what you need.
You need to send a request to AWS for GPU instance access first.
For the database, please check the create_db.py script.
Unlike the classification database, the detection database also writes bounding-box information to the LMDB at the same time.
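The key idea is that the image database and the label database are written side by side under identical keys, so the data layer and the label layer stay in sync at training time. The real script serializes records as Caffe Datum protobufs into LMDB; the sketch below is a stdlib-only stand-in (plain dicts and struct-packed floats) just to illustrate the shared-key pairing:

```python
import struct

def write_detection_dbs(samples):
    """Write paired image/label records under identical keys.

    `samples` is a list of (image_bytes, boxes) tuples, where each box is
    (left, top, right, bottom, class_id). Dicts stand in for the two LMDBs.
    """
    image_db, label_db = {}, {}
    for i, (image_bytes, boxes) in enumerate(samples):
        key = "%08d" % i  # zero-padded index, identical in both databases
        image_db[key] = image_bytes
        # Pack each box as 5 little-endian 32-bit floats, concatenated.
        label_db[key] = b"".join(struct.pack("<5f", *box) for box in boxes)
    return image_db, label_db

# One fake 2-byte "image" with a single box (left, top, right, bottom, class_id):
imgs, labels = write_detection_dbs(
    [(b"\x00\x01", [(10.0, 20.0, 50.0, 60.0, 1.0)])]
)
```

Because detection images can contain a variable number of objects, the label record for each key is variable-length, which is why detection needs its own database-writing path instead of the classification one.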