Camera calibration is a task as old as computer vision, yet even today experience and supervision remain crucial for obtaining reliable, high-quality results.
How can the calibration process be automated, for example with a robot? How can unskilled operators run it? In other words, how can calibration expertise be encoded in an algorithm, without any compromise on calibration quality?
The video below shows exactly this: an automated camera calibration process that enables robots or unskilled operators to retrieve intrinsic and distortion parameters of excellent quality (under objective metrics) from a minimal set of camera shots. Here the chessboard is moved by hand, but a 6DoF robot could do the same.
After grabbing 3 initial shots of the chessboard with free poses, the tool iteratively suggests the next best pose of the chessboard in order to maximize the quality of the estimation over the intrinsic and the distortion parameters.
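The suggest-grab-recalibrate loop can be sketched as follows. The scoring function here is a hypothetical stand-in: the real tool scores a candidate by the estimation quality it would add over the intrinsic and distortion parameters, while this toy version simply rewards poses unlike the ones already grabbed.

```python
# Hypothetical skeleton of the next-best-pose loop; poses are toy 1-D values
# here, whereas the real tool works with full 6DoF chessboard poses.
def predicted_quality(shots, candidate):
    # Stand-in score: how different the candidate is from every grabbed pose.
    return min(abs(candidate - s) for s in shots)

def suggest_next_pose(shots, candidates):
    # Greedy step: propose the candidate with the best predicted score.
    return max(candidates, key=lambda c: predicted_quality(shots, c))

shots = [0.0, 0.1, 0.9]           # three initial shots with free poses
candidates = [0.05, 0.5, 0.95]    # poses the tool could suggest next
best = suggest_next_pose(shots, candidates)  # -> 0.5, the most novel pose
```

After each suggested shot is grabbed, the calibration is re-estimated and the loop repeats until the quality metric stabilizes.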
In the upper right part of the view there is a plot of the reprojection error measured on an independent test set of images (red line: 95% quantile, blue line: mean); the error reaches a stable (and excellent) value with as few as 6 images. Our tests show that with common optical configurations the automated process produces a calibration comparable to that of a skilled operator, but with fewer images. When the optical specs become challenging (especially for focal length or distortion parameters), however, the automated process outperforms the skilled engineer.
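The two plotted statistics are straightforward to compute from per-point reprojection errors on the held-out set; a minimal sketch (the toy correspondences are invented for illustration):

```python
import numpy as np

def reprojection_stats(observed, projected):
    """Per-point pixel errors between observed and reprojected corners,
    summarized as (mean, 95% quantile)."""
    errs = np.linalg.norm(np.asarray(observed) - np.asarray(projected), axis=-1)
    return errs.mean(), np.quantile(errs, 0.95)

# Toy held-out correspondences: observed corners vs. their reprojections.
obs = np.array([[100.0, 50.0], [200.0, 80.0], [300.0, 120.0], [400.0, 160.0]])
prj = obs + np.array([[0.2, 0.0], [0.0, 0.3], [-0.25, 0.0], [0.0, 1.1]])
mean_err, q95_err = reprojection_stats(obs, prj)
```

Tracking the 95% quantile alongside the mean is what makes the stopping criterion robust: a calibration can have a good mean error while still failing badly near the image corners, and the quantile exposes that.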
Under the hood lies a tricky non-linear global optimization: predicting the reprojection error as a function of the next pose, solved with the help of simulated annealing and evolution strategies. The same technique can be reworked and applied to other calibration tasks, e.g. estimating the extrinsic parameters of a stereo camera. Enjoy!
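For the curious, a bare-bones simulated annealing routine of the kind usable for such a pose search is sketched below. The objective here is an invented quadratic stand-in for the predicted reprojection error of a candidate pose; the real objective, cooling schedule, and pose parametrization are of course more involved.

```python
import math
import random

def simulated_annealing(score, x0, step, iters=2000, t0=1.0, seed=0):
    """Minimize score(x) over a real-valued pose vector x.

    Accepts worse candidates with probability exp(-delta / t) so the search
    can escape local minima; t decays linearly to near zero (greedy finish).
    """
    rng = random.Random(seed)
    x, best = list(x0), list(x0)
    fx = fbest = score(x0)
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9          # linear cooling schedule
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = score(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc                        # accept the move
            if fc < fbest:
                best, fbest = cand, fc              # track the best so far
    return best, fbest

# Toy objective standing in for the predicted error of a 3-parameter pose.
objective = lambda p: sum((pi - 0.5) ** 2 for pi in p)
start = [2.0, -1.0, 0.0]
best_pose, best_score = simulated_annealing(objective, start, step=0.2)
```

An evolution strategy would replace the single perturbed candidate with a population of perturbations, keeping the best performers; both are derivative-free, which is what makes them attractive when the objective is only available through simulation.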