
Three-dimensional Information Computing of Weld Seam Based on Binocular Vision for Arc Welding Robot

【Author】 Chen Xizhang

【Supervisor】 Chen Shanben

【Author Information】 Shanghai Jiao Tong University, Materials Processing Engineering, 2007, Doctoral dissertation

【Abstract (Chinese)】 Existing welding robot systems are, on the whole, first-generation teach-and-playback systems or, in part, second-generation off-line-programmed systems; neither type of robot in industrial service can adapt to changes in the welding environment or working conditions. Giving the welding robot the ability to adjust and plan autonomously from sensor information is of great practical significance for realizing autonomous robotic welding. To strengthen the autonomy, adaptability and intelligence of arc welding robots, this dissertation imitates a welder's observation of and adaptation to the environment during welding: two CCD cameras are mounted on the robot end-effector to observe the welding environment over a wide field, and the principle of binocular vision is used to recognize and guide the robot to the initial welding position and to acquire the three-dimensional (3D) spatial information of the weld seam, laying a foundation for autonomous robotic welding.

Vision-based localization of the initial welding position and acquisition of 3D seam information is a process of visual 3D reconstruction, and binocular stereo vision is one of its important methods. The two CCD cameras are arranged at an angle on the robot end-effector so that the welding environment lies within their common field of view. Images of the environment are captured at a macroscopic scale, the initial welding position is found autonomously over a wide area, the 3D coordinates of the whole seam are computed in the robot coordinate system, and the robot is driven to the initial welding position and, before welding, along the seam path.

Image recognition is the basis of visual localization. Considering the characteristics of the welding environment, a set of algorithms is proposed for recognizing the initial welding position and the seam in macroscopic images: the seam is recognized first, and the recognized image is then used to locate the initial welding position. Seam recognition comprises removal of specular-reflection regions, image enhancement, edge detection and post-processing. The proposed adaptive regional fuzzy image enhancement algorithm (ARFIE) partitions the image adaptively according to a normalized relative fuzzy contrast and enhances each region in fuzzy space according to its level, giving good enhancement for images of different contrast. The overall seam recognition algorithm identifies workpiece images well for different materials and welding environments. For image localization of the initial welding position, the intersection of the seam with the workpiece edge is taken as an initial value and a corner is extracted within a local window, locating the initial welding position precisely.

Calibration is the bridge from 2D image information to 3D spatial information. The binocular vision system is calibrated, including the intrinsic and extrinsic parameters of each camera, the relative pose between the two cameras, and the hand-eye relationship. Once the system configuration is fixed, the camera intrinsic parameters no longer change, but the hand-eye relationship may change during operation; it is therefore calibrated directly from workpiece image information.

Stereo matching is one of the key difficulties of stereo vision. An invariant-based optimized rectification algorithm for cameras in general configuration is proposed, which rectifies an arbitrarily configured binocular system to the ideal parallel configuration. The algorithm estimates the theoretical projection region of the rectified images and uses the information in that region to design an optimized rectification mapping. Experiments show that it reduces distortion and information loss in the rectified images, raises image resolution, and greatly improves the quality of the rectified image pair. For the rectified stereo pair, a coarse-to-fine multi-information matching method (CTFMIMM) is proposed: special points in the welding environment, such as the initial welding position, define the search range and the candidate match set; the strength and direction of edge features serve as constraints; and the gray-level information of the original image is fully exploited, with the proposed gray-level similarity parameter DOGs finally determining the unique correct match. The algorithm is a cooperative matching method based on structured edge features, interest points and gray-level correlation.

The seam information is reconstructed in 3D and transformed into the robot coordinate system for convenient motion control. The influence of robot errors on the 3D visual computation is analyzed, including the repeat positioning accuracy and the TCP control point. Experiments show that the standard error contributed by repeat positioning accuracy is no greater than 0.3 mm, and that the TCP must be recalibrated when its maximum calibration error exceeds 1 mm. The influence of the vision-system configuration on the visual computation is analyzed in detail and further confirmed by tests.

After introducing the experimental system, experiments on initial-position guidance and 3D seam acquisition are reported for several typical seams. The pure visual computation error, excluding robot motion error, is less than 0.56 mm. For guidance to the initial welding position of spatial seams, the maximum error in the x, y and z directions is less than 1.1 mm. For 3D computation of whole seams, the maximum in-plane distance error and height error are 1.2 mm and 1.3 mm for planar seams, and 1.2 mm and 1.6 mm for spatial seams. A modular programming approach separates hardware-dependent code from the 3D visual computation code, so the algorithms can be ported easily between robots and after vision equipment is replaced.

Autonomous localization of and guidance to the initial welding position and acquisition of 3D seam information are the technical basis of intelligent welding. The approach adapts well to the environment, can replace current teach-and-playback and CAD-based off-line programming, and is especially significant for welding important workpieces and for welding in hazardous environments.
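The ARFIE enhancement step named above builds on classical fuzzy image enhancement. The numpy sketch below shows a single-region version using a Pal-King style membership function and the INT intensification operator; the thesis applies a regional, adaptively partitioned variant, and the parameter choices here (crossover at the image mean, Fe = 2) are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def fuzzy_enhance(img, fe=2.0, n_iter=1):
    """Single-region fuzzy enhancement of an 8-bit grayscale image (sketch).

    Gray levels are mapped to fuzzy membership values, intensified with the
    INT operator, and mapped back. ARFIE would apply this per adaptively
    chosen region with region-level parameters.
    """
    x = img.astype(np.float64)
    x_max = x.max()
    # Denominational parameter chosen so the crossover (mu = 0.5) falls at
    # the image mean -- an illustrative choice, not the thesis's.
    fd = (x_max - x.mean()) / (2.0 ** (1.0 / fe) - 1.0)
    mu = (1.0 + (x_max - x) / fd) ** (-fe)          # membership function
    for _ in range(n_iter):
        # INT operator: push memberships away from the 0.5 crossover.
        mu = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    # Inverse membership mapping, clipped back to the 8-bit range.
    return (x_max - fd * (mu ** (-1.0 / fe) - 1.0)).clip(0, 255).astype(np.uint8)

img = np.tile(np.arange(256, dtype=np.uint8), (4, 1))  # synthetic gradient
out = fuzzy_enhance(img)
```

Applying this per region, with parameters driven by each region's fuzzy contrast, is what makes the thesis's version adaptive.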

【Abstract】 Current welding robot systems essentially belong to the first generation of teach-and-playback robots, with a few belonging to the second generation of off-line-programmed robots. Neither type can adapt to changes in the environment or working conditions. Enabling a welding robot to adjust itself and plan autonomously from sensor information is therefore of great practical importance for realizing autonomous welding. To raise the intelligence and reliability of arc welding robots, this dissertation mounts two CCD cameras on the robot end-effector, emulating a welder's two eyes, to observe the welding environment over a wide field of view. Autonomous recognition and positioning of the initial welding position (IWP), guidance of the robot to the IWP, and acquisition of the three-dimensional (3D) coordinates of spatial weld seams are realized using the principle of binocular vision. This research is a foundation for autonomous robotic welding.

Positioning the IWP and acquiring 3D seam information is a procedure of visual 3D reconstruction, and binocular vision is an important method in this field. The two CCD cameras are arranged at an angle and fixed on the robot end-effector so that the welding environment lies within their common field of view, allowing workpiece images of a large welding environment to be captured. The IWP is found and located over a relatively large area, the 3D coordinates of the seam are computed in the robot coordinate system, and with this information the robot is guided to the IWP and controlled to move along the seam path.

Image recognition is the first step of 3D reconstruction. An algorithm is proposed to recognize the IWP and the whole seam in a relatively large welding environment: the whole seam is recognized first, then the IWP.
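Once the hand-eye relationship is calibrated, mapping a seam point from the camera frame into the robot base frame, as described above, is a chain of rigid transforms. A minimal numpy sketch; all poses and numeric values are illustrative, not taken from the thesis:

```python
import numpy as np

def to_base(p_cam, R_flange_cam, t_flange_cam, R_base_flange, t_base_flange):
    """Map a 3D point from camera coordinates to robot base coordinates."""
    p_flange = R_flange_cam @ p_cam + t_flange_cam    # camera -> flange (hand-eye)
    return R_base_flange @ p_flange + t_base_flange   # flange -> base (from controller)

# Hypothetical hand-eye result: camera rotated 180 deg about x relative to
# the flange, offset 50 mm along the flange z axis.
R_fc = np.diag([1.0, -1.0, -1.0])
t_fc = np.array([0.0, 0.0, 50.0])
# Hypothetical flange pose: at (300, 0, 400) mm in the base frame, no rotation.
R_bf = np.eye(3)
t_bf = np.array([300.0, 0.0, 400.0])

p = to_base(np.array([10.0, 20.0, 500.0]), R_fc, t_fc, R_bf, t_bf)
# p is the seam point expressed in the robot base frame.
```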
Recognition of the whole seam comprises pre-processing (removal of specular reflections, filtering, etc.), image enhancement, edge detection and post-processing. An algorithm named ARFIE (Adaptive Regional Fuzzy Image Enhancement) is proposed: a parameter named the normalized relative fuzzy contrast serves as the criterion for partitioning the image into regions, and each region is enhanced in fuzzy space according to its level. The method enhances images of different contrast effectively, and the overall procedure recognizes seams well on workpieces of different materials and in different environments. To recognize the IWP, the intersection of the seam with the workpiece boundary is taken as an initial value, and a corner is detected in a window centered on that value; using both edge and gray-level information ensures accurate recognition of the IWP.

Calibration is the bridge from 2D images to 3D spatial information. The binocular vision system is calibrated, including the intrinsic and extrinsic parameters of the two cameras, their relative pose, and the hand-eye relationship. Once the cameras are selected and mounted, their intrinsic parameters do not change; the hand-eye relationship, however, may change during operation, for example because of a collision, so an on-line hand-eye calibration algorithm is adopted.

Stereo matching is a key and difficult problem in stereo vision. An invariant-transformation optimized rectification algorithm is proposed that rectifies images from a general camera configuration to the ideal parallel configuration. The algorithm estimates the theoretical projection area of the rectified images and derives the rectification mapping from the information in that area. Experiments show that it reduces or avoids image distortion and information loss and increases image resolution.
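The initial-value step for the IWP described above, intersecting the detected seam edge with the workpiece boundary, can be sketched with two fitted image lines. The endpoints below are made-up pixel coordinates for illustration:

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersection of line p1-p2 with line q1-q2 in 2D image coordinates."""
    # Solve p1 + s*(p2 - p1) = q1 + u*(q2 - q1) for the parameters s, u.
    d1, d2 = p2 - p1, q2 - q1
    A = np.column_stack([d1, -d2])
    s, _ = np.linalg.solve(A, q1 - p1)
    return p1 + s * d1

# Hypothetical fit: seam edge along y = x, workpiece boundary along x = 100.
seam_a, seam_b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
edge_a, edge_b = np.array([100.0, 0.0]), np.array([100.0, 50.0])
iwp0 = line_intersection(seam_a, seam_b, edge_a, edge_b)  # initial IWP estimate
```

In the thesis method this intersection only seeds the search: a corner is then extracted in a local window around it, refining the IWP from edge and gray-level information.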
The image pairs have higher quality after the optimized rectification. An algorithm named CTFMIMM (Coarse-to-Fine Multi-Information Matching Method) is proposed to solve the correspondence problem according to the characteristics of the welding environment: the search range is defined by special points in the welding image, such as the IWP, and the structured edge information of the processed image together with the gray-level information of the original image is used during the matching search, ensuring rapid and accurate matching.

The 3D seam information is reconstructed and transformed into the robot coordinate system, so that the robot can be driven along the computed coordinates. The effect of robot errors on the 3D computation is analyzed, including TCP calibration and repeat positioning accuracy; experiments show that the TCP should be recalibrated when its error exceeds 1 mm. The effect of the vision-system configuration is also analyzed, and the corresponding experimental results are given.

Finally, the experimental system is introduced and experimental results are reported for several typical planar and spatial seams. The visual computation error in the camera coordinate system, excluding the effect of robot motion, is less than 0.56 mm. For guidance to the IWP, the error is less than 1.1 mm when the experimental conditions are satisfied. For the acquisition of 3D seam information, the in-plane distance error and height error are less than 1.2 mm and 1.3 mm for planar seams, and less than 1.2 mm and 1.6 mm for spatial seams, respectively. The program is organized in modules, with hardware-dependent code and the 3D visual computation kept in separate modules, which eases applying the algorithms to different robots and setups. This research is the technical foundation for realizing autonomous welding.
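After rectification to the parallel configuration, reconstructing a matched seam point reduces to disparity-based triangulation. A minimal sketch of that step; the focal length, baseline and pixel coordinates are illustrative, and it assumes the principal point has already been subtracted from the image coordinates:

```python
import numpy as np

def triangulate_rectified(xl, xr, y, f, baseline):
    """3D point from a rectified (parallel-configuration) stereo pair.

    xl, xr: matched x coordinates in the left/right images (pixels, principal
    point subtracted); y: common row coordinate; f: focal length in pixels;
    baseline: camera separation. After rectification the disparity
    d = xl - xr yields depth directly: Z = f * baseline / d.
    """
    d = xl - xr
    Z = f * baseline / d
    X = xl * Z / f   # back-project through the left camera
    Y = y * Z / f
    return np.array([X, Y, Z])

# Hypothetical rig: f = 1000 px, baseline = 100 mm; a seam point matched
# at xl = 60, xr = 40 on row y = 30.
p = triangulate_rectified(60.0, 40.0, 30.0, 1000.0, 100.0)
```

The result is in the left camera frame; in the thesis pipeline it would then be mapped into the robot coordinate system via the calibrated hand-eye relationship.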
It adapts to changes in the environment and is a sound replacement for teach-and-playback operation and CAD-based off-line programming; it is especially valuable for welding in hazardous environments.

  • 【Classification Code】 TG43
  • 【Cited by】 7
  • 【Downloads】 1090