Keras MobileDetectNet (Fast Object Detection on Jetson Nano)

Hello, after working with various object detection networks on the Jetson platform, I decided to create one with an emphasis on being easy to train and running at high FPS with low memory, aimed at hobbyist and maker projects.

Meet Keras MobileDetectNet, a network with ~300K parameters which can run at 55 FPS on the Jetson Nano using TF-TRT. It is simple to train (often producing usable results in < 50 epochs), uses the same KITTI label format as NVIDIA DIGITS, and includes robust online image augmentation. Even with a small dataset of 1-2K images it performs well for a network with so few parameters, making it a good fit for hobbyist projects which need object detection. This is partly thanks to its use of Faster R-CNN’s anchor system, which produces much more robust bounding box regression results.

End to end source code is provided for training and inference, including how to optimize the graph with TF-TRT: GitHub - csvance/keras-mobile-detectnet: Fast Object Detector for the Jetson Nano


Vancecs - I love this! Keep educating us! I want to be able to do my own projects like this, not just follow a step by step example.

How did you install imgaug?

I’m trying to run your train.py --help script, but it tells me I’m missing some of the dependencies. I’ve installed plac via pip3.

After that, it gave me a ModuleNotFoundError for ‘imgaug’. When I tried to install imgaug through pip3, it couldn’t find Shapely. I installed geos through sudo apt-get install libgeos-dev and then Shapely through pip3 install shapely, but when I run pip3 install imgaug, it tells me that imgaug requires opencv-python.

imgaug is not needed for inference, only training. There is no need to install it on the Jetson. Assuming your training system is x86_64, it should install flawlessly via pip3.

I’m trying to train and I used this command:

python3 train.py --batch-size 24 --epochs 500 --train-path ~/my/folder/train --eval-path ~/my/folder/val --workers 4

but I get the following error:

File "/home/angelo/keras-mobile-detectnet/generator.py", line 99, in __getitem__
old_shape = image.shape
AttributeError: 'NoneType' object has no attribute 'shape'

How do I get around this?

Usually this indicates an issue with the path to the images. It could also be a non-image file present in the directory (like thumbs.db or .DS_Store, for instance).

What does the directory structure for your training data look like? It should match the KITTI format: DIGITS/digits/extensions/data/objectDetection at master · NVIDIA/DIGITS · GitHub

--train-path and --val-path should each point to a directory which contains both an images and a labels directory.
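
For reference, the layout looks roughly like this (file names here are just placeholders):

train/
  images/
    000001.png
    000002.png
  labels/
    000001.txt
    000002.txt
val/
  images/
  labels/

Each .txt label file contains one space-separated KITTI row per object, for example:

car 0.0 0 0.0 100.0 120.0 180.0 160.0 0 0 0 0 0 0 0

where fields 5-8 are the bounding box (left, top, right, bottom) in pixels.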

In our case, there were hidden files.

For anyone who encounters the same issue, just go to the directory, hit Ctrl+H to show hidden files, sort the files by name, and delete everything starting with ‘._’.
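
If you would rather do it from a script, something like this should work too (the path here is just a placeholder for your own train/val directories):

from pathlib import Path

# Delete macOS resource-fork files ("._*") left behind in the dataset directories
for hidden in Path("~/my/folder/train").expanduser().rglob("._*"):
    print("Deleting", hidden)
    hidden.unlink()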

I got a new error in generator.py

'seq_augment' is not defined

The line in question is:

image_aug = seq_augment.det_image(image)

We tried using:

ia.seq_augment
iaa.seq_augment
ia.seq.augment
iaa.seq.augment

but all to no avail.

That line isn’t in generator.py in the current master branch as best I can tell (last updated July 13th 2019).

Would you mind posting the full stack trace?

It looks like something that may have been in a previous version however. I would recommend doing a git pull if that is the case, or, if you made local changes, stashing them with git stash and then running git pull to update your copy.

You’re right, sorry. I was about to open another issue ticket on your github, then I saw the questionable line of code.

I think it’s training now and it’s processing the bounding boxes at line 113. May I get a ballpark figure on how long this process is gonna take?

EDIT:

You guys might have to modify generator.py. In my case, my label files were being parsed with an extra blank line in load_kitti_label’s split call.

In the for loop, insert the following:

for row in label.split('\n'):
    fields = row.split(' ')

    # skip blank rows (e.g. a trailing newline) so they are not parsed as labels
    if len(fields) == 1:
        continue

This makes sure that blank lines won’t be processed and parsed! Otherwise, you’ll get an IndexError at:

bbox_truncated = float(fields[1])

Hi,
Could you please tell me which dataset you used to train and infer? If you could post a link to it then it would be great too :)

Good catch! The easiest fix is just calling .strip() before splitting the lines. I committed this to the master branch. I will also look into handling hidden files better.
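
For anyone following along, the committed fix amounts to roughly this (the exact code in generator.py may differ slightly):

for row in label.strip().split('\n'):
    fields = row.split(' ')
    # with the label text stripped there is no trailing blank row, so fields[1] is safe
    bbox_truncated = float(fields[1])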

Hi shriramhr,

I trained it with a subset of OpenImages: Open Images V7

Here is a notebook which gets all images with the squirrel class and their bounding boxes and structures the data in KITTI format for training: https://github.com/csvance/keras-mobile-detectnet/blob/master/kitti/openimages.ipynb

How do we edit inference.py to take in camera input (webcam or Raspberry Pi camera) and display it in a window?

Hey angelo,

I am working on putting together an example using it in realtime with OpenCV for a demo on Friday. Will post an update when the code is complete. But if you are interested, I have a working motion detection script with realtime display available here that could be used as a reference: https://github.com/ieee-uh-makers/pi-workshop/blob/master/python/lesson2_motion.py

Please do. I edited the existing inference.py code to take webcam video input for inference (I’m Zeit42 on GitHub), but it doesn’t work.

What I tried doing was saving the webcam input as jpeg images in a folder, and then running the original inference.py code on that folder. I was able to get results on the images in the folder, but not in my realtime object detection.

Hi Angelo, here is a notebook which allows you to visualize everything in realtime from a camera:

Right now it just uses TensorFlow for inference, but it wouldn’t be difficult to adapt it to use TF-TRT for optimization (see inference.py for an example of this).

I converted it to normal Python code that can be run from the terminal.

Although, if you’re going to use TF-TRT, you need the following lines:

tftrt_engine = keras_model.tftrt_engine(precision=inference_type, batch_size=1)

classes, bboxes = tftrt_engine.infer(batch)

instead of:

bboxes, classes = tf_engine.infer(batch)

What I did in my code was add an argument for choosing TF or TF-TRT, create the engine as needed, and adjust the classes/bboxes order accordingly, roughly as sketched below.
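
Here is a rough sketch of that idea (only tftrt_engine()/infer() and tf_engine.infer() come from the snippets above and inference.py; the camera and drawing helpers are placeholders you would fill in yourself):

use_tftrt = inference_type in ('FP32', 'FP16', 'INT8')

if use_tftrt:
    engine = keras_model.tftrt_engine(precision=inference_type, batch_size=1)
else:
    engine = tf_engine  # however the plain TensorFlow engine is built in your script

while True:
    batch = grab_camera_batch()  # placeholder: read a frame and preprocess it into a batch

    if use_tftrt:
        classes, bboxes = engine.infer(batch)
    else:
        bboxes, classes = engine.infer(batch)

    draw_detections(bboxes, classes)  # placeholder: overlay the boxes on the current frame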


Hi vancecs, I ran train.py. It got no errors, but the program is stuck at “Epoch 1/10”.
I set batch_size=1 and epochs=10, and the paths point to the folder that contains the images and labels folders.
What else should I do? Thanks a lot.