Direct visual servoing based on a new basis set and switching strategy
Yecan Yin, Xiangfei Li, Huan Zhao, Wenbo Ning, Yiwei Wang, Han Ding

Direct visual servoing uses all pixel intensities of the entire image as inputs for robot control. Because of the high dimensionality of the image space, it achieves high convergence accuracy at the cost of a small convergence domain. Recent work on direct visual servoing decomposes the images into different signal spaces. Although a large convergence domain can be obtained this way, other performance metrics, such as convergence rate, convergence accuracy, and robustness under illumination variations, are degraded. In other words, there is an inevitable trade-off between the convergence domain and the other performance metrics. To mitigate this trade-off, this article proposes a new and effective direct visual servoing approach that constructs a new basis set with spatial-frequency properties and accounts for the numerical relationships among the bases during the switching process. First, a new set of bases is constructed specifically for visual servoing rather than by leveraging existing bases or transformations, and the analytical relationship between the original loss function and the transformed one is derived for the first time. Then, because different bases have different convergence properties, an effective switching control strategy is designed to select an appropriate basis at different states. Finally, a series of simulations and experiments is carried out, and the results demonstrate that the proposed approach significantly outperforms state-of-the-art approaches in terms of convergence domain, convergence rate, convergence accuracy, robustness under illumination variations, robustness under partial occlusions, and performance in several three-dimensional scenes. In addition, the proposed approach adapts well to different camera parameters.
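The central quantity in direct visual servoing is the photometric error stacked over all pixels, and the abstract's claim about relating the original loss to a transformed one rests on projecting this error onto a basis. The sketch below (Python with NumPy) uses a generic orthonormal basis obtained via QR as a stand-in; the paper's own spatial-frequency basis set and switching strategy are not reproduced here, only the general principle that an orthonormal change of basis preserves the sum-of-squares loss.

```python
import numpy as np

# Minimal sketch: the control error in direct visual servoing is the
# difference between the current image I and the desired image I_star,
# taken over every pixel (here flattened into a vector).
rng = np.random.default_rng(0)
n = 64                                   # number of pixels (flattened image)
I_star = rng.random(n)                   # desired image intensities
I = rng.random(n)                        # current image intensities

e = I - I_star                           # photometric error over all pixels
loss = 0.5 * e @ e                       # original sum-of-squares loss

# A stand-in orthonormal basis (NOT the paper's constructed basis):
B, _ = np.linalg.qr(rng.random((n, n)))
e_t = B.T @ e                            # error expressed in the basis
loss_t = 0.5 * e_t @ e_t                 # transformed loss

# For an orthonormal basis the two losses coincide (up to float error),
# which is the kind of analytical relationship the abstract refers to.
assert np.isclose(loss, loss_t)
```

In practice a structured basis (e.g. with spatial-frequency ordering) lets a controller weight or select components with different convergence behavior, which is what motivates a switching strategy between bases.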