Hi,
I am new to ML / deep learning and trying to get started with the Nano. For my application I need a lightweight network. From my understanding, YOLO or MobileNet is the way to go.
My setup is built on the example from dusty-nv/ros_deep_learning (https://github.com/dusty-nv/ros_deep_learning), which provides deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT.
So far this has been awesome at getting me started, and I can do some detection and message passing to my other applications. But now I need to refine my network for my custom needs, and I am curious how I can do that.
I set up DIGITS and have the Docker container up and running, but when I select pretrained models, MobileNet is not an option. Is there any way I can add more training data or refine the training data when I use the MobileNet model in the detectnet node?
For example, when I execute rosrun ros_deep_learning detectnet _model_name:=ssd-mobilenet-v2 or rosrun ros_deep_learning segnet _model_name:=ssd-mobilenet-v2, I want it to load my refined network.
I tried downloading the MobileNet model from the NGC store and uploading that to DIGITS, but it complained about JSON files.
Anyone know how to get this started?
Hi micallef, SSD-Mobilenet wasn't trained in DIGITS; it is trained in TensorFlow and then converted to UFF with this tool: https://github.com/AastaNV/TRT_object_detection
Can I do something similar for segnet then? Train in TensorFlow and then convert using the tool?
I am also slightly confused about how https://github.com/AastaNV/TRT_object_detection helps me… If I just follow the instructions in the readme, will it convert for me? Or do I need to know anything else? I just do not see any mention of converting to UFF is all.
Hi,
Sorry for the late update.
ssd_mobilenet_v2 can be re-trained with the TensorFlow Object Detection API:
https://github.com/tensorflow/models/tree/master/research/object_detection
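A minimal retraining sketch, assuming the TF1-era model_main.py entry point and a pipeline config you have already prepared (all paths and file names here are illustrative, not from the original post):

$ python model_main.py --pipeline_config_path=ssd_mobilenet_v2_custom.config --model_dir=training/ --alsologtostderr

After training, export a frozen graph that the UFF conversion below can consume:

$ python export_inference_graph.py --input_type image_tensor --pipeline_config_path ssd_mobilenet_v2_custom.config --trained_checkpoint_prefix training/model.ckpt-XXXX --output_directory exported/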
YOLOv3 can be retrained with the Darknet framework:
https://pjreddie.com/darknet/yolo/
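For example, the stock VOC training command from the Darknet site looks like this; for a custom dataset you would swap in your own .data and .cfg files (the pretrained darknet53.conv.74 weights are downloaded separately):

$ ./darknet detector train cfg/voc.data cfg/yolov3-voc.cfg darknet53.conv.74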
The TRT_object_detection repository linked above applies the .pb → .uff → TensorRT conversion and launches TensorRT through its Python interface.
In general, you can convert a model into UFF with this command:
$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o [output/file/name].uff -O [output/layer/name] -p config.py
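For instance, assuming an ssd_mobilenet_v2 frozen graph and the graph-surgeon config.py from the TensorRT SSD sample (where the post-processing output is the NMS plugin node), a concrete call might look like this; the file names are illustrative:

$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o ssd_mobilenet_v2.uff -O NMS -p config.py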
In the GitHub repo, the same conversion is applied directly through the Python API (model here is the per-network config module from the repo, and the bracketed names are placeholders):
import uff
import graphsurgeon as gs

dynamic_graph = model.add_plugin(gs.DynamicGraph(model.path))  # apply the repo's plugin/node substitutions
uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), [output/layer/name], output_filename=[output/file/name])
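Once you have the .uff file, a minimal sketch of building and serializing a TensorRT engine from it with the Python API could look like the following; the "Input"/"NMS" names and the 3x300x300 shape are assumptions for the SSD-Mobilenet case, so adjust them for your model:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28  # build-time workspace, 256 MiB
    # Register the model's input/output nodes before parsing
    parser.register_input("Input", (3, 300, 300))
    parser.register_output("NMS")
    parser.parse("ssd_mobilenet_v2.uff", network)
    engine = builder.build_cuda_engine(network)

# Serialize the engine so later runs can load it without rebuilding
with open("ssd_mobilenet_v2.engine", "wb") as f:
    f.write(engine.serialize())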
Thanks.