I’m looking to use Jetson for training a classifier. There are some tools available in OpenCV to do this. Are these accelerated to use the GPU? If not what is the best way to get accelerated versions?
I did not want to reinvent the wheel, but if the tools are not available I may look into porting them to use the available OpenCV GPU functions, if that is possible.
This is a pretty broad question. There are two ways that OpenCV is accelerated on the Jetson: GPU and CPU. The CPU optimizations take advantage of the multi-core architecture. One case where CPU-accelerated support is not available is the non-free modules SIFT and SURF (which are covered by third-party patents). In that case, most people either cut and paste the parts they need alongside OpenCV4Tegra, or they build OpenCV from scratch and forgo the CPU acceleration. In either case, GPU acceleration is still available.
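For the build-from-scratch route, a minimal sketch of a CUDA-enabled configure step looks something like the following. This is an assumption-laden example, not a definitive recipe: the `CUDA_ARCH_BIN` value assumes a Jetson TK1 (compute capability 3.2), and the directory layout assumes a plain OpenCV source checkout — adjust both for your board and tree.

```shell
# Configure and build OpenCV from source with CUDA support enabled.
# CUDA_ARCH_BIN="3.2" targets the Jetson TK1's GPU; change it for other boards.
cd opencv
mkdir -p build && cd build
cmake -D CMAKE_BUILD_TYPE=Release \
      -D WITH_CUDA=ON \
      -D CUDA_ARCH_BIN="3.2" \
      -D CUDA_ARCH_PTX="" \
      ..
make -j4
```

Building this way gives you the non-free modules and the GPU module, at the cost of losing OpenCV4Tegra's CPU optimizations.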
In the case of a classifier, how much it takes advantage of the GPU is a function of the classifier code itself. Assuming it sticks with the standard calls, it should be GPU accelerated, but it really is case by case. Typical GPU speedups come through functions such as FFTs. For example, a Haar feature-based cascade classifier should be GPU accelerated.
I’ve reread http://elinux.org/Jetson/Computer_Vision_Performance. I had not understood that there was an alternative binary for standard OpenCV. I had assumed that to get GPU acceleration the code had to use the GPU API.
So my question is: for a standard Jetson installation of OpenCV, are tools like opencv_traincascade accelerated?
Or will I have to tinker under the hood to link the tool against the accelerated library?
If tinkering is required, has anyone done it already?
(I kind of hope some tinkering is required, as I would like to do a before/after comparison, out of interest.)
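For reference, one quick way to see which libraries a given tool was actually linked against is to inspect its shared-library dependencies with `ldd`. A minimal sketch — the `opencv_traincascade` path is an assumption, so adjust it to wherever your installation puts the binary:

```shell
# Print any CUDA/GPU-related shared libraries a binary links against;
# if none are found, say so explicitly.
check_gpu_link() {
  ldd "$1" 2>/dev/null | grep -Ei 'cuda|npp|opencv_gpu' || echo "no GPU libraries linked"
}

# Example invocation (path is an assumption; adjust to your install):
check_gpu_link /usr/bin/opencv_traincascade
```

If the output lists the CUDA runtime or `libopencv_gpu`, the tool can at least reach the GPU code paths; if not, it was built against CPU-only libraries.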