Well, it's because it's low cost and we can buy two of them for little money. Check out https://www.plymouth.ac.uk/research/robotics-neural-systems/plymouth-owl.
We have been competing in international robotics championships (RoboCup and FIRA) since 2007 with kid-size robots, but they are too small to carry any serious onboard processing for vision work. We have been developing a teen-size robot, Scott, since 2016, but progress is slow. Check out https://www.facebook.com/pages/category/College---University/University-of-Plymouth-Robot-Football-148285347160.
We are re-engineering the control systems and using a Xavier for the vision processing (I managed to get the dept. to buy one). The head is a re-engineered OWL head with agile, steerable stereo cameras. We use these for teaching (module code AINT308), and expose students to depth by vergence and also by disparity, using the OWL within OpenCV. They also implement the Itti & Koch saccadic eye-movement model for salience, a model from cognitive psychology. In this way I go some way towards preparing them for a future in computer vision with autonomous robots, after they have been given a grounding in OpenCV.
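For a flavour of the disparity exercise, here is a minimal OpenCV 4 sketch; the block-matcher settings and the focal length/baseline values are illustrative placeholders, not the OWL's actual calibration:

```cpp
// Minimal stereo-disparity depth sketch (OpenCV 4). Matcher settings and
// the focal length / baseline below are placeholders, not real calibration.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // 64 disparity levels, 21x21 SAD window (tune for your rig).
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disp16;
    bm->compute(left, right, disp16);            // CV_16S, 16 * disparity

    cv::Mat disp;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);  // true disparity in pixels

    // Depth from disparity: Z = f * B / d (f in pixels, B in metres).
    // Pixels with d <= 0 are invalid and should be masked before use.
    const float f = 700.0f, B = 0.065f;          // placeholder calibration
    cv::Mat depth = f * B / disp;

    cv::imshow("disparity", disp / 64.0f);
    cv::waitKey(0);
    return 0;
}
```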
We are adapting the salience model to control the stereo eyes of the Scott robot during a robot-football match: to guide the VGA- or HD-resolution cameras to pan and tilt over a 300-degree neck rotation and generate a visual map of the scene. We plan to use saccades to update this map. This will be the basis of the robot's executive control unit, allowing it to generate and update its SLAM map of the pitch, the ball and the players.
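To sketch the idea (this is not our actual code): the Itti & Koch model reduces each frame to a saliency map, the winner-take-all peak becomes the next saccade target, and inhibition of return suppresses it so gaze moves on. The degree-per-pixel constant and the angle convention here are assumptions:

```cpp
// Winner-take-all saccade selection over a saliency map (sketch only;
// computing the Itti & Koch saliency map itself is not shown).
#include <opencv2/opencv.hpp>
#include <cstdio>

const double DEG_PER_PIXEL = 0.1;   // placeholder: depends on lens FOV

struct GazeCommand { double panDeg, tiltDeg; };

// Pixel offset of the most salient point from image centre -> eye angles.
GazeCommand nextSaccade(const cv::Mat& saliency) {   // CV_32F, one channel
    cv::Point peak;
    cv::minMaxLoc(saliency, nullptr, nullptr, nullptr, &peak);
    double dx = peak.x - saliency.cols / 2.0;
    double dy = peak.y - saliency.rows / 2.0;
    return { dx * DEG_PER_PIXEL, -dy * DEG_PER_PIXEL };
}

// Inhibition of return: zero out the attended region so the
// next-most-salient target wins the following saccade.
void inhibitReturn(cv::Mat& saliency, cv::Point target, int radius = 30) {
    cv::circle(saliency, target, radius, cv::Scalar(0), cv::FILLED);
}

int main() {
    cv::Mat sal(480, 640, CV_32F, cv::Scalar(0));
    sal.at<float>(100, 500) = 1.0f;               // fake salient blob
    GazeCommand g = nextSaccade(sal);
    std::printf("pan %+.1f deg, tilt %+.1f deg\n", g.panDeg, g.tiltDeg);
    inhibitReturn(sal, cv::Point(500, 100));
    return 0;
}
```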
In addition, we want to implement smooth pursuit for the eyes to let us track a ball. This involves a complex set of decisions about switching the tracking mode: from target tracking on an optical-flow field, to vergence tracking, and perhaps also disparity depth analysis and Kalman filtering to follow the ball. This will feed into the decision-making processes that allow the robot to react to the ball.
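The Kalman side of that is fairly standard; something like this constant-velocity filter, where the noise covariances are placeholders to tune and detectBall() is a hypothetical stand-in for whichever tracking mode is active:

```cpp
// Constant-velocity Kalman filter for ball tracking (sketch; noise
// values are placeholders, and detectBall() stands in for whichever
// tracker -- optical flow or vergence -- is currently active).
#include <opencv2/opencv.hpp>

// Hypothetical detector: returns false when the ball is lost.
bool detectBall(const cv::Mat& frame, cv::Point2f& pos) { return false; }

int main() {
    // State [x, y, vx, vy]; measurement [x, y].
    cv::KalmanFilter kf(4, 2, 0, CV_32F);
    const float dt = 1.0f / 90.0f;                // frame period at 90 fps
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, dt, 0,
        0, 1, 0, dt,
        0, 0, 1,  0,
        0, 0, 0,  1);
    cv::setIdentity(kf.measurementMatrix);
    cv::setIdentity(kf.processNoiseCov,     cv::Scalar::all(1e-3));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));

    cv::Mat frame;                                // filled by the capture loop
    cv::Point2f ball;
    for (int i = 0; i < 1; ++i) {                 // stands in for the video loop
        cv::Mat pred = kf.predict();              // expected ball position
        if (detectBall(frame, ball)) {
            cv::Mat meas = (cv::Mat_<float>(2, 1) << ball.x, ball.y);
            kf.correct(meas);                     // fuse the detection
        }
        // If detection fails, drive the eyes from pred alone,
        // coasting through occlusions until the ball reappears.
    }
    return 0;
}
```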
The cameras can run at 90 fps at VGA resolution, but the OV5647 has a serious shortcoming: a rolling shutter. We have to live with that. I signed an NDA with OmniVision some years back for the full camera datasheet, so we are in a good position to develop a driver for it.
We are using the Adafruit cameras (the spy-camera version), which are very lightweight units. This means we can shift them with high-speed servos at rates approaching human eye movements (we can do 450 deg/s; humans at their fastest manage 900 deg/s), which is fine for normal tracking purposes.
In tests using the stereo disparity examples from VisionWorks and OpenCV 4, I can get the Xavier to run the disparity calculations at 100 fps, but only 20 fps when fed from the OWL stereo cameras (a Pi streams them as MJPEG into OpenCV's VideoCapture API on the Xavier host).
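For reference, that 20 fps path is nothing exotic: VideoCapture just pulls the Pi's MJPEG stream over HTTP. Roughly like this, where the URL and port are placeholders for whatever the Pi streamer exposes:

```cpp
// Pulling the Pi's MJPEG stream into OpenCV on the Xavier (sketch;
// URL and port are placeholders for the Pi-side streamer).
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::VideoCapture cap("http://owl-pi.local:8080/stream.mjpg");
    if (!cap.isOpened()) { std::fprintf(stderr, "stream not open\n"); return 1; }

    cv::Mat frame;
    int n = 0;
    double t0 = (double)cv::getTickCount();
    while (cap.read(frame) && n < 200) ++n;       // grab 200 frames
    double secs = ((double)cv::getTickCount() - t0) / cv::getTickFrequency();
    std::printf("%.1f fps end-to-end\n", n / secs);
    return 0;
}
```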
So I expect that by moving the cameras onto the Xavier's Samtec CSI-2 interface and installing a V4L2 driver, I should be able to get 80 fps from the stereo camera pair. That might then be fast enough to mitigate the rolling shutter for my robot-football application. We already have a vision-processing tree from our kid-size robots that we need to port and update to the Xavier.
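Once a driver is in place, capture on the Xavier would presumably go through GStreamer or straight V4L2 instead of HTTP. Something like the following, where the sensor ID, caps, and the choice of nvarguscamerasrc (which needs an Argus-compatible driver) are all assumptions at this stage:

```cpp
// Hypothetical CSI capture on the Xavier once a driver exists. Element
// names are standard Jetson/GStreamer; everything else is assumed.
#include <opencv2/opencv.hpp>

int main() {
    // Option 1: through libargus (needs an Argus-aware driver):
    cv::VideoCapture left(
        "nvarguscamerasrc sensor-id=0 ! "
        "video/x-raw(memory:NVMM),width=640,height=480,framerate=90/1 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! appsink",
        cv::CAP_GSTREAMER);

    // Option 2: plain V4L2, letting OpenCV read /dev/video0 directly:
    // cv::VideoCapture left("/dev/video0", cv::CAP_V4L2);

    cv::Mat frame;
    while (left.read(frame)) {
        // ... feed into the stereo/disparity pipeline ...
        break;                                    // placeholder loop body
    }
    return 0;
}
```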
We are getting the hard stuff (camera interface etc.) done first; then students can get involved in the easier stuff (Kalman filtering, optic flow, SLAM etc.) and take the robots to competition.
Sorry that was rather long!!