I am attempting to develop a system that detects the position of fuses. The fuses I have vary in color and numbering. While labeling the images, I was unsure whether this would be the best method, because I remembered that the images might be converted to grayscale, which could discard the important color channels. Is it worthwhile to use the technique demonstrated in this tutorial? Dusty
Do the input images look like a static scene, with only the fuse positions varying within some ROI of the image? Is each color associated with a unique number on the fuse?
Yes, the only difference between the images is the position of the fuses, and each fuse color corresponds to a specific and unique number. The project aims to check if the fuses are in the correct position according to the vehicle’s configuration.
I meant that in each input image the bounding box positions are predefined, so we only need to check whether a fuse is present at each bounding box position. This is possible if we always get similar input images from a static camera at the same distance/angle. If that is the case, we can simply look at the pixel colors within each bounding box to map them to fuse types/numbers. That gives us a fuse vector to compare against the expected configuration.
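To illustrate the idea above, here is a minimal sketch of the fixed-ROI approach using only NumPy. The color-to-number map, the slot coordinates, and the distance threshold are all hypothetical placeholders, not values from jetson-inference; a real setup would calibrate these against actual frames:

```python
import numpy as np

# Hypothetical color -> fuse-number map (BGR values are assumptions).
FUSE_COLORS = {
    5:  (0, 0, 200),    # red fuse
    10: (200, 0, 0),    # blue fuse
    15: (0, 200, 200),  # yellow fuse
}

# Predefined slot ROIs as (x, y, w, h); positions are placeholders.
SLOTS = [(10, 10, 20, 20), (40, 10, 20, 20)]

def classify_slot(image, roi, max_dist=80.0):
    """Average the pixel color inside the ROI and match it to the
    nearest known fuse color; return None if nothing is close enough
    (e.g. the slot is empty)."""
    x, y, w, h = roi
    mean = image[y:y+h, x:x+w].reshape(-1, 3).mean(axis=0)
    best, best_dist = None, max_dist
    for number, color in FUSE_COLORS.items():
        dist = np.linalg.norm(mean - np.array(color, dtype=float))
        if dist < best_dist:
            best, best_dist = number, dist
    return best

def fuse_vector(image):
    """Build the fuse vector across all predefined slots."""
    return [classify_slot(image, roi) for roi in SLOTS]

# Synthetic frame: slot 0 holds a "red" fuse, slot 1 is empty (gray).
frame = np.full((60, 80, 3), 120, dtype=np.uint8)
frame[10:30, 10:30] = (0, 0, 200)

print(fuse_vector(frame))               # -> [5, None]
print(fuse_vector(frame) == [5, None])  # compare with expected configuration
```

The comparison at the end is the whole check: the observed fuse vector either matches the expected configuration for the vehicle or it does not. Note this only works while the camera pose is fixed, which is exactly the assumption being discussed.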
The camera angle and distance can change between images. A strategy I intend to implement is for the operator to point the camera at the fuse panel, and at the same time the system indicates whether the fuses are in the correct position for the given configuration.
But can you confirm whether jetson-inference takes the colors of the fuses into account?