Detect signs with camera (OpenCV)

Hi, first of all, thank you for reading. We are doing a project on an autonomous vehicle involving machine learning. We are using the NVIDIA TK1 as the main component of the vehicle. However, we have no clue how to program the autonomous vehicle to detect signs such as arrows, so that it can turn itself in the direction indicated.

A thorough literature search would be part of any project, academic or industrial. What research have you done up to this point? What have you tried? Where are you stuck? If you do an internet search for

“traffic sign detection” “opencv” “gpu”
“traffic sign recognition” “opencv” “gpu”
“traffic sign classification” “opencv” “gpu”

you can find plenty of publications, and even sample code on GitHub. That wealth of material should allow you to get started.

It should be possible to build a fairly simple traffic sign classifier based on tools and components provided by NVIDIA.

You’ll need an image training set of representative signs that you want to classify.
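For image classification, DIGITS can build a dataset from a folder that contains one sub-folder per class. As a minimal sketch of preparing that layout (the paths, the labels.csv file, and the class names here are hypothetical, not part of any DIGITS tool):

[code]
import csv
import shutil
from pathlib import Path

# Hypothetical layout: raw images in raw_images/, labels in labels.csv
# with rows like "img_0001.jpg,left_arrow". DIGITS can then build a
# classification dataset from the resulting train/<class>/ folders.
RAW_DIR = Path("raw_images")
OUT_DIR = Path("train")

with open("labels.csv", newline="") as f:
    for filename, label in csv.reader(f):
        class_dir = OUT_DIR / label          # e.g. train/left_arrow/
        class_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy(RAW_DIR / filename, class_dir / filename)
[/code]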

You can train the network using DIGITS on top of Caffe. For simplicity, I would recommend doing this on a PC, not on the TX1. You’ll want a GPU to reduce the training time.
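Once DIGITS has produced a deploy prototxt and a .caffemodel snapshot, a quick way to sanity-check the model on the PC is to classify a single image with pycaffe. This is only a sketch: the file names and the "data"/"prob" blob names are assumptions based on a typical DIGITS image-classification network, so adjust them to match yours.

[code]
import numpy as np
import caffe

caffe.set_mode_gpu()  # or caffe.set_mode_cpu() on a machine without a GPU

# File names are assumptions; DIGITS lets you download these from a trained model.
net = caffe.Net("deploy.prototxt", "snapshot_iter_XXXX.caffemodel", caffe.TEST)

# Preprocess: HWC float image in [0,1] -> CHW BGR scaled to [0,255],
# matching common Caffe conventions (match your network's training setup).
transformer = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
transformer.set_transpose("data", (2, 0, 1))
transformer.set_raw_scale("data", 255)
transformer.set_channel_swap("data", (2, 1, 0))

image = caffe.io.load_image("test_sign.jpg")
net.blobs["data"].data[...] = transformer.preprocess("data", image)
probs = net.forward()["prob"][0]
print("predicted class index:", int(np.argmax(probs)))
[/code]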

Once the network is trained, you can “transfer” the trained model to the TX1 and run it there, for example using TensorRT. I assume you can figure out how to feed the images from the camera input to the model for inference.
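Feeding the camera into the model is mostly an OpenCV capture loop. As a rough sketch of that loop (using OpenCV’s cv2.dnn module in place of TensorRT, just to show the shape of the camera-to-inference pipeline; the file names, input size, mean values, and class labels are assumptions):

[code]
import cv2
import numpy as np

# Assumed artifacts from the DIGITS/Caffe training step.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "snapshot_iter_XXXX.caffemodel")
labels = ["left_arrow", "right_arrow", "stop"]   # hypothetical class order

cap = cv2.VideoCapture(0)  # on Jetson, a GStreamer pipeline string may be needed instead
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 224x224 and these mean values are common defaults; match your network.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1.0, size=(224, 224),
                                 mean=(104, 117, 123))
    net.setInput(blob)
    probs = net.forward().flatten()
    idx = int(np.argmax(probs))
    cv2.putText(frame, "%s (%.2f)" % (labels[idx], probs[idx]), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("sign detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
[/code]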

The easiest way to get started with DIGITS IMO is to follow the instructions here:

[url]https://devblogs.nvidia.com/parallelforall/nvidia-docker-gpu-server-application-deployment-made-easy/[/url]

Can I train on the Jetson TX2 if I install the caffe-0.16 branch of NVIDIA/caffe from GitHub?
(preferably without DIGITS)

My host PC does not have an NVIDIA GPU so…