I would like to train a network to recognize a part. I need to be sure that it is this specific part and not another one, and I would also like to know the pose of the recognized part. Is this possible with jetson-inference, or do I have to use another repository? I have found one on GitHub called SSD-6D, but I don't know how to train it. Do I have to shoot plenty of pictures to prepare a dataset for training the network? The bin will contain only one kind of part, but in various poses. I would also like to use a 3D camera, maybe a Kinect 2, to find out how far the part is from the sensor, so that it will be easy to send correction data to the robot for picking the part from the bin.
Is it possible?
jetson-inference doesn't support pose estimation by default.
However, you can re-train the classifier on your custom dataset.
For the Kinect camera, here are some possible issues we recommend confirming first:
1. Does your camera support GStreamer?
Please note that jetson-inference uses GStreamer as its camera input interface, so you will need to confirm that your camera can work with GStreamer first.
2. What data format does your camera output?
The default image format is YUV420. With some changes, jetson-inference can also support RGB/RGBA.
However, you might need to implement the format conversion on your own if your camera uses a different data format.
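To illustrate point 2, here is a minimal sketch of converting a raw I420 (YUV420 planar) buffer to RGB with NumPy. The plane layout and the BT.601 coefficients are assumptions; check them against what your camera actually emits before relying on this.

```python
import numpy as np

def i420_to_rgb(frame, width, height):
    """Convert a raw I420 (YUV420 planar) buffer to an RGB image.

    I420 layout: a full-resolution Y plane followed by quarter-resolution
    U and V planes. Uses BT.601 full-range coefficients (an assumption --
    verify against your camera's actual output).
    """
    y_size = width * height
    uv_size = y_size // 4

    y = frame[:y_size].reshape(height, width).astype(np.float32)
    u = frame[y_size:y_size + uv_size].reshape(height // 2, width // 2).astype(np.float32)
    v = frame[y_size + uv_size:].reshape(height // 2, width // 2).astype(np.float32)

    # Upsample the chroma planes to full resolution (nearest neighbour)
    # and center them around zero.
    u = u.repeat(2, axis=0).repeat(2, axis=1) - 128.0
    v = v.repeat(2, axis=0).repeat(2, axis=1) - 128.0

    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u

    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```

A neutral-grey frame (Y = U = V = 128) should come out as RGB (128, 128, 128), which is a quick sanity check for the plane offsets.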
Thank you for the answer.
I'm not committed to jetson-inference at all; the problem can be solved with some other repository, etc. Maybe you know of any on GitHub, or any other software (preferably freeware)?
GStreamer support for the Kinect v2 has already been done.
The installation procedure is somewhat different, but it is doable.
How can I re-train the classifier?
For inference, you can update the GStreamer pipeline here:
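As an illustration of the kind of pipeline change involved, here is a small sketch that composes a GStreamer pipeline string which normalizes the camera output to RGBA via videoconvert. The default source element and caps are placeholder assumptions; substitute whatever element your Kinect v2 GStreamer plugin actually registers.

```python
def build_camera_pipeline(source="v4l2src device=/dev/video0",
                          width=1280, height=720, fps=30):
    """Compose a GStreamer pipeline string that converts the camera
    output to RGBA before handing it to the application.

    The source element is a placeholder -- replace it with the element
    provided by your Kinect v2 GStreamer plugin.
    """
    caps = f"video/x-raw,width={width},height={height},framerate={fps}/1"
    return f"{source} ! {caps} ! videoconvert ! video/x-raw,format=RGBA ! appsink"
```

The videoconvert element lets GStreamer negotiate the format conversion for you, so the application only ever sees RGBA frames.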
For training, it's recommended to use DIGITS.
Here is our tutorial for your reference:
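DIGITS classification datasets are typically prepared as one folder per class. As a sketch of that preparation step (the filename convention and class names below are just examples, not anything DIGITS mandates), a script like this sorts captured images into that layout:

```python
import shutil
from pathlib import Path

def build_digits_dataset(raw_dir, out_dir, classes):
    """Sort captured images into one sub-folder per class -- the layout
    DIGITS expects when creating a classification dataset.

    Assumes each image filename starts with its class label, e.g.
    'part_0001.jpg' / 'background_0001.jpg' -- adapt this to however
    you name your captures.
    """
    raw_dir, out_dir = Path(raw_dir), Path(out_dir)
    for cls in classes:
        (out_dir / cls).mkdir(parents=True, exist_ok=True)
        for img in raw_dir.glob(f"{cls}_*.jpg"):
            shutil.copy(img, out_dir / cls / img.name)
```

Once the folder tree exists, you can point DIGITS at `out_dir` when creating a new classification dataset and it will infer the class labels from the sub-folder names.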