Hi!
We built this project to enable our omnidirectional robot to execute soccer tasks autonomously:
In the RoboCup Small Size League (SSL), the Vision Blackout Challenge encourages teams to propose solutions that let robots execute basic soccer tasks on the field using only embedded sensing information. We propose an embedded monocular vision approach for detecting objects and estimating their positions relative to the robot inside the soccer field.
During soccer matches, and especially in the Vision Blackout challenge, SSL objects mostly lie on the ground. We exploit this prior knowledge in our monocular vision solution for detecting objects and estimating their positions relative to the robot:
- The camera is fixed to the robot, and its intrinsic and extrinsic parameters are obtained offline using calibration and pose-computation techniques from the Open Source Computer Vision Library (OpenCV);
- SSD MobileNet v2 is used for detecting objects’ 2D bounding boxes on camera frames;
- Linear regression maps the bounding box coordinates to a point on the field corresponding to the object’s bottom center; the pre-calibrated camera parameters are then used to estimate that point’s position relative to the camera and, therefore, to the robot;
- The system achieves real-time performance, averaging 30 frames per second at 10.8 W of power consumption.
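The ground-plane step above amounts to a ray–plane intersection: the viewing ray through the object’s bottom-center pixel is cast into the world and intersected with the floor (z = 0). Here is a minimal sketch assuming a pinhole model with known intrinsics K and world-to-camera extrinsics R, t; the function name and the example camera values are illustrative, not the project’s actual API:

```python
import numpy as np

def ground_point_from_pixel(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane z = 0.

    K: 3x3 intrinsic matrix; R, t: world-to-camera extrinsics.
    Returns the 3D world point where the viewing ray meets the floor.
    """
    # Viewing ray through the pixel, expressed in the camera frame.
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the ray into the world frame and recover the camera center.
    d_world = R.T @ d_cam
    cam_center = -R.T @ t
    # Scale the ray so that it reaches the ground plane z = 0.
    s = -cam_center[2] / d_world[2]
    return cam_center + s * d_world

# Illustrative camera: 10 cm above the floor, looking along the world x axis.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.array([[0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
t = -R @ np.array([0.0, 0.0, 0.1])
```

Because all SSL objects of interest rest on the floor, this single intersection resolves the depth ambiguity that monocular vision otherwise suffers from.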
The following figure illustrates a scheme for the proposed method:
The Nano sends target positions and kicking commands to the robot’s microcontroller through a UDP socket, enabling it to execute basic soccer tasks autonomously:
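The UDP link can be sketched as below; the packet layout (three little-endian floats for target x, y and heading, plus one kick byte) is a hypothetical example for illustration, not the team’s actual wire format:

```python
import socket
import struct

# Hypothetical packet layout: target x, y in metres, heading in radians,
# then a single byte for the kick command.
COMMAND_FORMAT = "<fffB"

def send_command(sock, addr, x, y, theta, kick=False):
    """Pack a navigation/kick command and send it over UDP."""
    packet = struct.pack(COMMAND_FORMAT, x, y, theta, int(kick))
    sock.sendto(packet, addr)

# Usage (address is illustrative):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_command(sock, ("192.168.0.10", 9000), 1.0, 0.5, 0.0, kick=True)
```

UDP fits this control loop well: commands are small, sent at a fixed rate, and a lost packet is simply superseded by the next one, so no retransmission logic is needed on the microcontroller side.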
- Grabbing a Stationary Ball:
- Scoring on an Empty Goal:
Code and Documentation are available at:
Our full paper, presented at the 2022 RoboCup Symposium, can be found on arXiv: [2207.09851] An Embedded Monocular Vision Approach for Ground-Aware Objects Detection and Position Estimation
Thanks!
João