
Calibration of Omnidirectional Vision Sensors

【Author】 林颖

【Supervisor】 刘济林

【Author Information】 Zhejiang University, Communication and Information Systems, 2013, PhD dissertation

【Abstract】 The omnidirectional vision sensors discussed in this dissertation include a passive sensor, the omnidirectional camera, and an active sensor, the omnidirectional lidar. Owing to their large field of view, such sensors are widely used for environment perception on autonomous ground platforms, and their special imaging characteristics make their calibration a long-standing fundamental problem in computer vision. This dissertation studies calibration methods for omnidirectional cameras and omnidirectional lidars, focusing on three aspects: calibration of omnidirectional cameras, their self-calibration, and extrinsic calibration of a lidar-camera system. Precise omnidirectional camera calibration is achieved by exploiting the geometric properties of a chessboard calibration pattern on the unit viewing sphere, which improves the accuracy of the results; the idea of sparse-representation-based low-rank texture recovery is applied to obtain reliable self-calibration results conveniently; and geometric structure constraints together with motion estimation are used to solve the joint calibration of an omnidirectional camera and an omnidirectional lidar. The main contributions are as follows (illustrative sketches of the three techniques appear after the abstract):

1. An omnidirectional camera calibration method based on the unit viewing sphere, which provides accurate correspondences between 2D image and 3D space information. The method exploits the geometric properties, on the viewing sphere, of the two mutually orthogonal sets of parallel lines of a chessboard calibration pattern to derive closed-form solutions for the intrinsic and extrinsic parameters. Thanks to these more precise parameter estimates, the method further reduces the uncertainty of the calibration results compared with most state-of-the-art methods.

2. A sparse-representation-based self-calibration method for omnidirectional cameras, which lets the sensor be verified quickly against a simple calibration scene. The method self-calibrates the camera from a single image by recovering the low-rank texture it contains, and defines a projection function, matched to the imaging characteristics of omnidirectional cameras, that effectively describes low-rank textures over a large spherical field of view. Unlike most self-calibration methods, it does not rely on local features such as edges and corners and is only weakly affected by occlusion, blur, and illumination, so its results are more reliable.

3. Two extrinsic calibration methods for a lidar-camera system, based on natural scenes. Compared with a stereo camera, a system combining an omnidirectional lidar and a camera offers lower computational complexity, higher accuracy, and less sensitivity to the environment when reconstructing 3D scenes, and extrinsic calibration of the system is a prerequisite for fusing the two sensors' data effectively. The methods define a reference world coordinate frame from a trihedron in the calibration scene and use trihedral structure constraints and inter-image motion estimation to solve for the poses of the lidar and camera frames relative to this reference frame, from which the extrinsic parameters between the two sensors follow. The approach is more flexible than most existing methods: it needs no special calibration object, depends little on manual input, and obtains accurate results from only two frames of data.
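The core geometric fact behind the first contribution is that a 3D line projects onto the unit viewing sphere as a great circle, so each set of parallel chessboard lines shares a direction orthogonal to all of that set's great-circle normals. The minimal numpy sketch below demonstrates this constraint on synthetic data; it assumes image points have already been lifted to unit rays by a central camera model, and it does not reproduce the dissertation's intrinsic-parameter estimation.

    # Sketch of the viewing-sphere geometry behind contribution 1 (synthetic
    # data; lifting pixels to unit rays and intrinsic estimation are assumed).
    import numpy as np

    def unit(v):
        return v / np.linalg.norm(v)

    def great_circle_normal(rays):
        # A 3D line projects onto the viewing sphere as a great circle: every
        # unit ray x toward the line satisfies n . x = 0, so the normal n is
        # the right singular vector with the smallest singular value.
        return np.linalg.svd(np.asarray(rays))[2][-1]

    def common_direction(normals):
        # Parallel lines share a direction d, and each of their great-circle
        # planes contains d, so n_i . d = 0 for every normal n_i of the set.
        return np.linalg.svd(np.asarray(normals))[2][-1]

    # Synthetic chessboard on the plane z = 2: one set of lines along d1, an
    # orthogonal set along d2, viewed by a central camera at the origin.
    d1, d2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    sets = {}
    for name, (d, across) in {"set1": (d1, d2), "set2": (d2, d1)}.items():
        normals = []
        for offset in (-0.2, 0.0, 0.3, 0.5):
            pts = [offset * across + t * d + np.array([0.0, 0.0, 2.0])
                   for t in np.linspace(-1.0, 1.0, 20)]
            normals.append(great_circle_normal([unit(p) for p in pts]))
        sets[name] = common_direction(normals)

    print(abs(sets["set1"] @ d1), abs(sets["set2"] @ d2))  # both ~1.0
    print(abs(sets["set1"] @ sets["set2"]))                # ~0.0 (orthogonal)

The two recovered directions, together with their cross product, constrain the rotation between the pattern and the camera; the dissertation's closed-form intrinsic and extrinsic solution builds on these same sphere constraints.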

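The principle behind the second contribution is that, with the correct camera parameters, the rectified view of a regular planar pattern is (approximately) a low-rank matrix. The dissertation recovers low-rank texture via sparse representation and a spherical large-field-of-view projection function; the toy sketch below replaces both with a hypothetical one-parameter radial warp and a brute-force search scored by spectral concentration, purely to show why low-rankness identifies the right parameter.

    # Toy illustration only: not the dissertation's algorithm. A rank-2
    # checkerboard is warped by a radial model with parameter k_true, and a
    # grid search recovers k by minimizing a low-rank score of the rectified
    # image.
    import numpy as np

    N = 200
    u = (np.arange(N) // 25 % 2).astype(float)
    texture = np.add.outer(u, u) % 2                  # rank-2 checkerboard

    ax = np.linspace(-1.0, 1.0, N)
    grid = np.stack(np.meshgrid(ax, ax, indexing="ij"), axis=-1)

    def sample(img, coords):
        # Nearest-neighbour lookup for coordinates in [-1, 1]^2.
        idx = np.clip(((coords + 1) / 2 * (N - 1)).round().astype(int), 0, N - 1)
        return img[idx[..., 0], idx[..., 1]]

    def distort(q, k):                                # ideal -> distorted
        return q * (1 + k * (q ** 2).sum(-1, keepdims=True))

    def undistort(p, k, iters=20):                    # fixed-point inverse
        q = p.copy()
        for _ in range(iters):
            q = p / (1 + k * (q ** 2).sum(-1, keepdims=True))
        return q

    k_true = 0.25
    observed = sample(texture, undistort(grid, k_true))  # distorted view

    def score(img):
        # Nuclear norm over spectral norm: small when the spectrum is
        # concentrated, i.e. when the rectified texture is low-rank.
        s = np.linalg.svd(img, compute_uv=False)
        return s.sum() / s[0]

    candidates = np.linspace(0.0, 0.5, 51)
    best = min(candidates, key=lambda k: score(sample(observed, distort(grid, k))))
    print("estimated parameter:", best)               # ~= 0.25

In the dissertation the analogous objective is solved as an optimization over rank and sparse error rather than a grid search, and the texture is rectified through the spherical projection defined for omnidirectional cameras rather than a planar warp.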
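The third contribution reduces sensor-to-sensor extrinsic calibration to two sensor-to-world pose estimates: once the camera and lidar poses relative to the trihedron-defined world frame are known, the lidar-to-camera transform follows by chaining rigid transforms. A minimal sketch with hypothetical poses:

    # Sketch of the transform chaining behind contribution 3. The two poses
    # below are hypothetical placeholders; in the dissertation they come from
    # trihedral structure constraints (lidar) and inter-image motion
    # estimation (camera) relative to the trihedron-defined world frame.
    import numpy as np

    def rot(axis, deg):
        # Rodrigues' formula: rotation matrix about a unit axis.
        a = np.asarray(axis, float)
        a = a / np.linalg.norm(a)
        t = np.deg2rad(deg)
        K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
        return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

    def se3(R, t):
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Sensor-to-world poses (sensor coordinates -> world coordinates).
    T_world_cam = se3(rot([0, 1, 0], 10.0), [0.3, 0.0, 1.2])
    T_world_lidar = se3(rot([0, 0, 1], -5.0), [0.0, 0.1, 1.0])

    # Lidar-to-camera extrinsics follow by chaining through the world frame.
    T_cam_lidar = np.linalg.inv(T_world_cam) @ T_world_lidar

    # Consistency check: mapping a lidar point via the world frame agrees
    # with mapping it directly by the chained extrinsics.
    p_lidar = np.array([1.0, 2.0, 3.0, 1.0])
    assert np.allclose(np.linalg.inv(T_world_cam) @ T_world_lidar @ p_lidar,
                       T_cam_lidar @ p_lidar)
    print(T_cam_lidar.round(3))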

  • 【Online Publication Contributor】 Zhejiang University
  • 【Online Publication Year/Issue】 2014, Issue 06