Speed up apriltag by searching within bounding box

We need to speed up the apriltag search in our images by a factor of at least 20x, and there can be multiple tags in one image. Since we have a good estimate of where each tag will appear, this should be possible by reducing the search space per tag to a bounding box (x1, y1)–(x2, y2). This is not part of the API, so how can we do it efficiently?

The gpu code of april tag is closed source. So is there a way to:

  1. open up the source code so that we can add this feature ourselves
  2. have multiple instances of the apriltag cuda running on different patches of the image?

Option 2) is more complex than it sounds: apriltag takes the camera model as input, so we'd have to offset the camera model for each instance, per frame, depending on where its bounding box sits in the image.
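For a pinhole camera model, the offset mentioned above reduces to shifting the principal point by the crop origin; focal lengths are unaffected. A minimal sketch in plain Python (the helper name is hypothetical, not part of any apriltag or Isaac ROS API):

```python
def crop_camera_matrix(K, x1, y1):
    """Re-express a 3x3 pinhole intrinsic matrix in the pixel
    coordinates of a crop whose top-left corner is (x1, y1).
    Only the principal point (cx, cy) moves; fx, fy are unchanged."""
    K_cropped = [row[:] for row in K]  # copy, keep original intact
    K_cropped[0][2] -= x1  # cx' = cx - x1
    K_cropped[1][2] -= y1  # cy' = cy - y1
    return K_cropped

# Example: full-image intrinsics, crop starting at pixel (400, 300)
K = [[900.0, 0.0, 640.0],
     [0.0, 900.0, 360.0],
     [0.0, 0.0, 1.0]]
K_c = crop_camera_matrix(K, 400, 300)
# principal point is now (240.0, 60.0) in the crop's pixel frame
```

A pose solved from crop pixels with this shifted matrix is expressed in the original camera frame, which is what makes the per-patch approach workable.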

This is for a Jetson device (in a mobile robot), so scaling up the gpu is not possible.

Hi Peter,

Happy to meet you in person at ROSCon.

A good strategy is to use the Isaac ROS image pipeline and crop a part of the image: isaac_ros_image_proc — isaac_ros_docs documentation

The crop node publishes:

  • New topic with cropped image
  • New camera_info corresponding to the new image
| ROS Topic | Interface | Description |
| --- | --- | --- |
| `crop/image` | NitrosImage | Cropped image. |
| `crop/camera_info` | NitrosCameraInfo | The corresponding camera_info of the cropped image. |
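Because the cropped camera_info already accounts for the crop, poses estimated from the cropped image are valid in the original camera frame; only pixel quantities (tag corners, centers) need the crop origin added back if you want them in full-image coordinates. A hedged sketch, assuming detections expose corners as (u, v) pixel pairs:

```python
def corners_to_full_image(corners, x1, y1):
    """Map tag corner pixels detected inside a crop back to
    full-image pixel coordinates by adding the crop origin (x1, y1)."""
    return [(u + x1, v + y1) for (u, v) in corners]

# Example: four corners detected in a crop whose top-left is (400, 300)
corners_in_crop = [(10.0, 12.0), (42.0, 12.0), (42.0, 44.0), (10.0, 44.0)]
corners_full = corners_to_full_image(corners_in_crop, 400, 300)
```

This keeps the per-tag detections comparable across crops, e.g. for drawing overlays on the full image or updating the next frame's bounding boxes.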

Let me know.
Raffaello