
Omni-vision Based Simultaneous Localization and Mapping Research for Mobile Robot

【Author】 吴叶斌 (Wu Yebin)

【Supervisor】 朱齐丹 (Zhu Qidan)

【Author Information】 Harbin Engineering University, Control Theory and Control Engineering, 2011, Ph.D. dissertation

【Abstract】 In an unknown environment, a mobile robot that predicts its position from odometry alone accumulates an ever-growing error over time. Accurate localization therefore requires an external sensor, such as a laser or vision sensor, to extract landmarks from the surroundings; a map of the environment is built from these landmarks, and the map information is used to correct the robot's position. Obtaining an accurate position requires correcting the odometry against the environment map, while building an accurate map requires knowing the robot's exact position. This coupled problem is known as simultaneous localization and mapping (SLAM); it encompasses robot localization, feature extraction, and map building, and demands breakthroughs in accuracy, robustness, and real-time performance. Because vision sensors provide rich information with a short sampling period, they have been widely applied to mobile robot navigation in recent years. Vision-based SLAM currently relies mainly on conventional cameras, whose field of view is narrow: they observe only about 60° in the forward direction, which limits continuous observation and tracking of visual landmarks. An omnidirectional (panoramic) vision sensor perceives the full 360°, so visual landmarks remain in its field of view longer, strengthening continuous observation and tracking. This dissertation studies omnidirectional-vision-based SLAM for mobile robots.

First, a perception model of the omnivision-based mobile robot SLAM system is established. From the imaging principle of the omnidirectional vision system, the projected position of a visual landmark on the ground is obtained; then, using the parallax-based binocular stereo three-dimensional measurement method together with the robot poses corresponding to two panoramic images, the landmark's position in the global coordinate frame is computed.

Second, the feature matching criteria are improved. The original matching algorithm produces many false matches, so it is modified as follows: 1) if two or more feature points match the same feature point in the other image, the Euclidean distances of their feature descriptors are compared, the pair with the minimum distance is kept, and the other pairs are deleted; 2) the remaining mismatches are removed with an angle constraint and a length constraint: the absolute angle change of each matched pair between the two image coordinate frames is compared with the mean absolute angle change over all matched pairs, and pairs deviating substantially from the mean are discarded; likewise, the mean Euclidean distance between matched points is computed, and pairs far from that mean are deleted. Experimental results show that the improved algorithm raises matching accuracy, eliminates the influence of false matches on SLAM, and strengthens the robustness of the SLAM system.

Third, the omnivision-based feature extraction method is combined with the EKF and FastSLAM algorithms to obtain an omnivision-based SLAM algorithm. It extracts visual landmarks with an improved SURF algorithm, localizes them through the system's observation model, and then updates the robot pose and the map simultaneously with EKF or FastSLAM. Simulation results demonstrate the superiority of omnidirectional vision over conventional vision, and experiments on a real robot verify the feasibility of the proposed algorithm.

Finally, a feature map database based on the SURF algorithm is built. The number of features in the database grows over time, and matching the feature points of the current image against every feature in the database would be too slow for real-time operation, or even computationally intractable. The database is therefore composed of many sub-maps, and a cost function selects the sub-map to be matched against the current panoramic image, ensuring that the robot obtains enough visual landmarks while the real-time performance of SLAM is preserved.
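The perception model in the first contribution recovers a landmark's global position by intersecting the two bearing rays observed in two panoramic images taken at known robot poses. Below is a minimal sketch of that parallax-based triangulation, assuming planar motion and bearing-only observations; the function name and argument layout are illustrative, not the dissertation's exact formulation.

```python
import numpy as np

def triangulate_landmark(pose1, pose2, bearing1, bearing2):
    """Intersect two bearing rays shot from poses (x, y, theta) to
    recover the landmark's (x, y) position in the global frame."""
    x1, y1, th1 = pose1
    x2, y2, th2 = pose2
    # A bearing measured in the panorama becomes a global ray direction
    # once the robot's heading is added to it.
    a1, a2 = th1 + bearing1, th2 + bearing2
    d1 = np.array([np.cos(a1), np.sin(a1)])
    d2 = np.array([np.cos(a2), np.sin(a2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray lengths (t1, t2).
    # np.linalg.solve raises LinAlgError for (near-)parallel rays,
    # i.e. when there is no parallax to triangulate from.
    A = np.column_stack((d1, -d2))
    b = np.array([x2 - x1, y2 - y1])
    t1, _ = np.linalg.solve(A, b)
    return np.array([x1, y1]) + t1 * d1

# Example: robot moves 1 m along x, landmark truly at (2, 1).
# triangulate_landmark((0, 0, 0), (1, 0, 0),
#                      np.arctan2(1, 2), np.arctan2(1, 1))  -> [2., 1.]
```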

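The two filtering rules of the second contribution can be sketched as follows. Matches are assumed to arrive as (i, j, descriptor_distance) triples; the reading of "angle" as a keypoint's polar angle about the panorama centre, and the thresholds angle_tol and dist_tol, are assumptions made for illustration.

```python
import numpy as np

def filter_matches(matches, kp1, kp2, center1, center2,
                   angle_tol=0.2, dist_tol=15.0):
    """matches: (i, j, d) triples pairing kp1[i] with kp2[j] at descriptor
    distance d; kp1, kp2 are (x, y) pixel coordinates in the two panoramas."""
    # Rule 1: when several points match the same point in the other image,
    # keep only the pair with the smallest descriptor Euclidean distance.
    best = {}
    for i, j, d in matches:
        if j not in best or d < best[j][2]:
            best[j] = (i, j, d)
    pairs = list(best.values())

    def polar(p, c):  # polar angle of a keypoint about the panorama centre
        return np.arctan2(p[1] - c[1], p[0] - c[0])

    # Rule 2: angle and length constraints. Compare each pair's absolute
    # angle change and pixel displacement with the means over all pairs.
    dang = np.array([abs(polar(kp2[j], center2) - polar(kp1[i], center1))
                     for i, j, _ in pairs])
    dist = np.array([np.linalg.norm(np.asarray(kp2[j]) - np.asarray(kp1[i]))
                     for i, j, _ in pairs])
    return [m for m, a, l in zip(pairs, dang, dist)
            if abs(a - dang.mean()) < angle_tol
            and abs(l - dist.mean()) < dist_tol]
```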
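In the EKF branch of the third contribution, each filtered SURF landmark observation triggers a correction of the joint robot-landmark state. The sketch below is the generic range-bearing EKF-SLAM update, not the dissertation's exact observation model; the state layout [x, y, theta, lx1, ly1, ...] and the noise covariance R are assumptions.

```python
import numpy as np

def ekf_update(mu, Sigma, z, lm_idx, R):
    """One EKF-SLAM correction from a range-bearing observation z = (r, phi)
    of landmark lm_idx; state mu = [x, y, theta, lx1, ly1, lx2, ly2, ...]."""
    x, y, th = mu[0], mu[1], mu[2]
    k = 3 + 2 * lm_idx
    dx, dy = mu[k] - x, mu[k + 1] - y
    q = dx * dx + dy * dy
    sq = np.sqrt(q)
    z_hat = np.array([sq, np.arctan2(dy, dx) - th])  # predicted observation
    # Sparse Jacobian: only the pose and the observed landmark contribute.
    H = np.zeros((2, len(mu)))
    H[:, 0:3] = [[-dx / sq, -dy / sq, 0.0],
                 [dy / q, -dx / q, -1.0]]
    H[:, k:k + 2] = [[dx / sq, dy / sq],
                     [-dy / q, dx / q]]
    S = H @ Sigma @ H.T + R               # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)    # Kalman gain
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap the bearing
    mu = mu + K @ innov
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu, Sigma
```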

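For the FastSLAM branch, each particle samples a robot pose and maintains an independent EKF per landmark; the particle's importance weight is the likelihood of the observation innovation. A generic sketch under the same range-bearing assumptions as the EKF update above:

```python
import numpy as np

def fastslam_weight(pose, lm_mean, lm_cov, z, R):
    """Importance weight of one particle given a range-bearing observation z
    of a landmark whose per-particle EKF estimate is (lm_mean, lm_cov)."""
    x, y, th = pose
    dx, dy = lm_mean[0] - x, lm_mean[1] - y
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - th])
    # Jacobian w.r.t. the landmark only: the pose is fixed by the particle.
    H = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],
                  [-dy / q, dx / q]])
    S = H @ lm_cov @ H.T + R              # innovation covariance
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap the bearing
    # Gaussian likelihood of the innovation -> weight factor for resampling.
    return (np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov)
            / (2 * np.pi * np.sqrt(np.linalg.det(S))))
```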
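Finally, the abstract does not give the form of the cost function that picks a sub-map out of the SURF feature map database, so the sketch below substitutes a hypothetical cost that favours sub-maps close to the robot's current pose estimate and rich in features; the sub-map fields and the weights are illustrative only.

```python
import numpy as np

def select_submap(submaps, robot_xy, w_dist=1.0, w_feat=0.01):
    """Choose which sub-map to match against the current panoramic image.
    submaps: dicts with 'centroid' (x, y) and 'features' (descriptor list).
    Matching only the selected sub-map keeps feature matching real-time
    while still supplying the robot with enough visual landmarks."""
    best, best_cost = None, np.inf
    for m in submaps:
        d = np.linalg.norm(np.asarray(m['centroid']) - np.asarray(robot_xy))
        # Hypothetical cost: near sub-maps are cheap, feature-rich ones cheaper.
        cost = w_dist * d - w_feat * len(m['features'])
        if cost < best_cost:
            best, best_cost = m, cost
    return best
```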