So I understand it is good to run inference on the Jetson with a model trained by DIGITS.
But it seems like the Jetson TX2 is optimized for three tasks: classification, detection, and segmentation, because the Deep Vision Runtime Inference API ([url]https://rawgit.com/dusty-nv/jetson-inference/master/docs/html/index.html[/url]) only documents those three networks.
Then what should I do if I want to do something like keypoint detection? I cannot even find a proper option for dataset creation in DIGITS.
Is the best approach to adapt one of the three networks provided by jetson-inference ([url]https://github.com/dusty-nv/jetson-inference[/url])?
(BTW, TensorFlow object detection was very slow for me…)
Deep Vision Inference also has the tensorNet base class, which can be used as a generic model. This mirrors the DIGITS approach, which allows users to create Image Classification, Object Detection, Segmentation, or “Other” datasets and models.
In addition to using tensorNet directly, you can create your own subclass for keypoints, similar to detectNet or segNet. These subclasses are useful for containing any of the pre- or post-processing required by the network.
Here is an example of using DIGITS “other” type option creation: [url]https://github.com/NVIDIA/DIGITS/blob/016d7c080cdf9d17e3317331ad911f52a20e13c2/examples/siamese/README.md[/url]
The other way, if DIGITS doesn’t fit your type of processing, is to use Caffe directly for training. Meanwhile, TensorRT provides the fastest inference.
You might also want to consider the traditional image feature detection functions, which are available as part of the visionworks and visionworks-sfm packages. This is CUDA-optimized code that returns image “feature points” and can run in real time on high-resolution imagery on the Jetson, with capacity to spare.
Is VisionWorks CUDA-enabled by default when installing JetPack 3.3?
You can also check the GPU utilization with tegrastats.