
Research on Local Environment Mapping for Autonomous Vehicles

【Author】 梁雄 (Liang Xiong)

【Supervisor】 唐琎 (Tang Jin)

【Author information】 Central South University, Control Science and Engineering, 2011, Master's thesis

【Abstract】 A prerequisite for an autonomous vehicle to drive accurately and reliably is that its onboard sensors perceive the environment and that the resulting data are analyzed and processed effectively. At present, no single sensor can provide complete and reliable data about the environment, so multi-sensor data fusion is one of the key technologies by which an autonomous vehicle perceives its surroundings. This thesis studies a multi-sensor environment-perception strategy for autonomous vehicles: obstacle detection by a single-line laser rangefinder is combined with road-region detection and recognition by a monocular camera, the data from the two sensors are fused, and a relatively complete local map is built. Map building in unknown environments, and perception in complex environments in particular, remains a research focus for autonomous vehicles, and progress there raises the vehicles' level of intelligence. The main work of the thesis covers the following aspects:
1) The data collected by the laser rangefinder are analyzed. After comparing several segmentation algorithms for 2-D laser scans, an improved adaptive breakpoint-detection algorithm is proposed: a linear threshold is applied at close range and adaptive breakpoint detection at long range. After suitable clustering, ordinary noise points are removed, and lines are extracted from the clustered points so that obstacles are displayed in structured form.
2) A sharp-change detection algorithm over frame sequences is proposed. By analyzing the frame sequence of laser-rangefinder data, a frame is flagged as abnormal when its point count approaches the maximum number of points the rangefinder can scan per frame and its cluster count changes sharply relative to the previous frame.
3) Camera calibration is carried out with the Matlab camera-calibration toolbox and the OpenCV calibration functions. The resulting intrinsic and extrinsic parameters are used for the inverse perspective mapping; the distortion coefficients obtained during calibration are first used to undistort the image, after which the road-region pixels of interest are transformed, making the computation from pixel coordinates to physical coordinates more accurate.
4) The rotation and translation parameters are solved from the physical distances of a calibration target, unifying the laser-rangefinder and camera coordinate frames; this registers the multi-sensor data and establishes a data-fusion system combining the single-line laser rangefinder and the monocular camera.

【Abstract】 Environment detection based on multiple sensors is a precondition for an autonomous vehicle to drive precisely and safely. So far, no single sensor can provide complete and reliable data about the environment, so data fusion is one of the key technologies for local environment mapping. This thesis presents research on a multi-sensor mapping strategy and uses data fusion to establish a reliable local map. Map building in unstructured environments, especially in complex urban environments, is an active research topic, and it improves the intelligence level of autonomous vehicles. The main contributions are as follows:
1. After comparing several segmentation algorithms, the thesis proposes a combined algorithm based on the ABD (adaptive breakpoint detector) algorithm and linear threshold segmentation. After segmentation, noise points are easily detected from the clusters, and the structured environment is then displayed using a line-extraction algorithm.
2. The thesis proposes a new SCSD (sharp change sequence detector) algorithm that detects abnormal data from large differences between consecutive frames of LRF data when the point count of the current frame is close to the maximum.
3. The Matlab camera calibration toolbox and the OpenCV calibration functions are used to obtain the camera parameters; the image is then undistorted using the distortion coefficients, and IPM (inverse perspective mapping) converts ROI (region of interest) pixels to physical distances precisely.
4. The rotation and translation matrices between the two Cartesian coordinate frames are used to unify the LRF and camera coordinates and establish a local environment map that takes advantage of both sensors.
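The hybrid segmentation rule of contribution 1 — a fixed linear threshold near the sensor, an adaptive breakpoint threshold farther out — can be sketched as follows. This is an illustrative reconstruction, not the thesis's implementation: the parameter names (`lam`, `sigma_r`, `d_near`, `near_thresh`) and their default values are assumptions, and the adaptive rule follows the common Borges–Aldon form of the adaptive breakpoint detector.

```python
import math

def segment_scan(ranges, angle_inc, lam=math.radians(10), sigma_r=0.03,
                 d_near=2.0, near_thresh=0.15):
    """Split one 2-D LRF scan into clusters at breakpoints.

    Hybrid rule (illustrative): a fixed threshold at close range,
    an adaptive range-dependent threshold at long range.
    """
    clusters, current = [], [0]
    for i in range(1, len(ranges)):
        r_prev, r = ranges[i - 1], ranges[i]
        # Euclidean gap between consecutive returns (law of cosines)
        gap = math.sqrt(r_prev**2 + r**2 - 2*r_prev*r*math.cos(angle_inc))
        if r_prev < d_near:
            thresh = near_thresh  # fixed (linear) threshold near the sensor
        else:
            # adaptive threshold grows with range and scan geometry
            thresh = r_prev * math.sin(angle_inc) / math.sin(lam - angle_inc) \
                     + 3 * sigma_r
        if gap > thresh:          # breakpoint: start a new cluster
            clusters.append(current)
            current = [i]
        else:
            current.append(i)
    clusters.append(current)
    return clusters
```

Each cluster is a list of point indices; noise rejection can then be done by discarding clusters with too few points before line extraction.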
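The SCSD test of contribution 2 combines two conditions: the current frame's point count is near the scanner's per-frame maximum, and the cluster count jumps sharply versus the previous frame. A minimal sketch — the thresholds `point_ratio` and `change_ratio` are illustrative assumptions, not values from the thesis:

```python
def is_abnormal_frame(prev_cluster_count, curr_cluster_count,
                      curr_point_count, max_points,
                      point_ratio=0.95, change_ratio=2.0):
    """Flag an LRF frame as abnormal when (a) its point count is near
    the per-frame maximum and (b) the cluster count changed sharply
    relative to the previous frame. Thresholds are illustrative."""
    near_max = curr_point_count >= point_ratio * max_points
    if prev_cluster_count == 0:
        return False  # no reference frame to compare against
    # ratio of larger to smaller cluster count between the two frames
    ratio = max(curr_cluster_count, prev_cluster_count) / \
        max(1, min(curr_cluster_count, prev_cluster_count))
    return near_max and ratio >= change_ratio
```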
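Contribution 3 converts undistorted road pixels to physical ground coordinates. For points on the ground plane Z = 0, the pinhole projection reduces to a 3×3 homography H = K[r1 r2 t], so IPM amounts to applying H⁻¹ to a pixel. A sketch of that mapping, assuming the image has already been undistorted with the calibration's distortion coefficients:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project an undistorted pixel onto the ground plane Z = 0.

    For a world point (X, Y, 0):  s*[u, v, 1]^T = K [r1 r2 t] [X, Y, 1]^T,
    so the ground point is recovered by inverting H = K [r1 r2 t].
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    w = np.linalg.solve(H, np.array([u, v, 1.0]))
    return w[:2] / w[2]  # (X, Y) in physical ground coordinates
```

This is the standard IPM formulation; the intrinsic matrix `K`, rotation `R`, and translation `t` come from the camera calibration described in the abstract.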
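Contribution 4 solves the rotation and translation that unify the LRF and camera frames from physical distances measured on a calibration target. One standard way to recover R and t from matched point pairs is the Kabsch least-squares fit via SVD; the sketch below names that general technique, not necessarily the thesis's exact solving procedure:

```python
import numpy as np

def solve_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t such that
    dst ≈ R @ src + t, from matched calibration-target points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

With R and t in hand, every LRF point can be mapped into the camera frame, which is the registration step that lets obstacle clusters and the IPM road region be fused into one local map.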

  • 【Online publisher】 Central South University
  • 【Online publication】 2012, Issue 01