
大构件焊缝磨抛机器人视觉测量技术的研究

Research on Grinding/Polishing Robotic Vision Measurement Technology for Weld Beads of Large Structural Parts

【Author】 Zhao Jun

【Advisor】 Zhao Ji

【Author Information】 Jilin University, Mechanical Manufacturing and Automation, 2014, Doctoral dissertation

【摘要 (Abstract)】 With the rapid development of industry and manufacturing, demand for the large structural parts used in high-speed train bodies, ship hulls, and aircraft fuselages keeps growing. Welding, an important manufacturing process, is widely used to form and join large flat and curved structural parts. The excess weld metal left after welding must be removed: grinding and polishing the weld bead is required not only to obtain a smooth, uniform joint surface but also to relieve welding stress, so removing the excess bead metal from large structural parts is of practical significance.

At present this removal is still done by hand. Skilled workers grind and polish the weld beads with flap wheels and similar tools. The work is labor-intensive and inefficient, its accuracy is hard to guarantee, and workers often damage the base metal; more importantly, the dust generated during grinding endangers their health, and they sometimes have to work at height or in confined spaces under harsh conditions. Automating the grinding and polishing of large-part weld beads is therefore urgently needed. Using a machine tool would require the machine to be larger than the workpiece itself; equipment of that size is difficult to build and assemble, lacks machining flexibility, and is expensive.

To address these problems, this dissertation proposes a new approach: grinding and polishing the weld beads of large structural parts with a small autonomous mobile robot. Autonomous grinding presupposes that the robot can accurately recognize, spatially measure, and locate the weld bead; only with real-time 3D geometric and position information of the bead can grinding parameters be planned effectively, the bead ground automatically, and the machining allowance inspected. Around these key measurement problems, the dissertation carries out innovative work on the design of the grinding-robot vision system, weld-bead grinding modeling, mathematical modeling of the vision system, weld image processing, weld-bead feature extraction, feature-point localization, and grinding-allowance detection.

Based on the working conditions of large-part weld-bead grinding and the measurement requirements of the grinding process, the dissertation combines, in a novel way, binocular stereo vision, laser structured-light assisted measurement, and P4P robot localization to design the vision system. Changes in the shape of the laser stripe projected onto different bead cross-sections reflect changes in the bead's 3D geometry; image processing accurately extracts the sub-pixel coordinates of the stripe centerline, and a feature analysis of those coordinates computes the sub-pixel coordinates of the bead feature points quickly and accurately. With binocular stereo vision, the disparity of corresponding feature points in the same image pair yields the bead's 3D geometry, avoiding the heavy computation of full image matching and saving considerable time and resources. Four coplanar LEDs serve as localization feature points, from which the bead's coordinates and orientation in the robot frame are accurately extracted.

Using the geometric and position information computed by the vision system, grinding methods are analyzed and discussed in depth; a grinding model based on the bead geometry is established, the quantitative relation between grinding force and removed bead volume is derived, and the overall workflow of the vision system is determined. The main vision hardware is selected and sized accordingly, and the theoretical resolution of the system in the Z direction is calculated and verified. A mathematical model of the vision system is established and solved; after analyzing and comparing camera-calibration methods, the intrinsic camera parameters are calibrated and a correction method for radial and center distortion is proposed.

The sources and characteristics of noise in the structured-light weld images are analyzed in detail. On that basis the images are denoised with adaptive median filtering, the gray values of the rectified stereo images are determined by gray-level interpolation, and edge sharpening, contrast stretching, and binarization are applied, after which the stripe edge pixels are searched to obtain the pixel coordinates of the stripe center. To raise processing speed and efficiency, a dynamic ROI (region of interest) localization algorithm is proposed for straight weld beads, shrinking the processed area to 2% of the original image and cutting computation accordingly. Within the dynamic ROI, a column difference-of-Gaussian algorithm extracts the sub-pixel stripe center without any image preprocessing. For beads of general shape, a dynamic-ROI detection algorithm based on the stripe position distribution and the image structure quickly and accurately extracts the ROI containing the bead feature points; exploiting the approximately Gaussian cross-sectional intensity of the stripe, a Hessian-matrix-based partial-differential detection algorithm with length-threshold pruning extracts the sub-pixel stripe center and removes spurious branch lines. On this basis, a slope-and-distance-threshold analysis algorithm accurately extracts the turning points bounding the bead width and the highest point of the bead, and numerical integration gives the bead's cross-sectional area at the laser projection line.

In the proposed vision-system model, the mathematical relation between robot and camera coordinates is computed from four coplanar LEDs on the robot body. Because these feature points appear approximately elliptical in the image, classical center-extraction algorithms applicable to elliptical targets, including the Gaussian-weighted gray centroid, thresholded gray centroid, Gaussian surface fitting, and paraboloid fitting methods, are derived and analyzed. Building on this, the dissertation proposes, for the first time, the SZCM optical feature-point center localization algorithm, which overcomes the heavy, inefficient computation of large-template Zernike moments; a discriminant-constrained ellipse-fitting method effectively mitigates the influence of spurious edge points on conventional least-squares ellipse fitting. Experiments show that the proposed SZCM algorithm locates ideal spot centers with high precision: under Gaussian noise the error is below 0.002 pixel, and on real images the variance of the extracted center coordinates still reaches 0.004 pixel, so the algorithm resists noise and spurious edge signals well.

Building on the theoretical analysis, the measurement and localization methods of the vision system were studied experimentally. Measurements of weld height, width, and cross-sectional area compared the image-processing accuracy and stability of the search-and-fit, column difference-of-Gaussian, and Hessian-based sub-pixel algorithms. Measuring the same bead cross-section under the different algorithms showed that the partial-differential centerline extraction combined with slope-distance-threshold analysis achieves a measurement accuracy within 0.09 mm. Taking the partial-differential algorithm as an example, the repeatability of the vision system was tested and found to be within 0.04 mm. Weld-seam tracking tests gave a maximum visual-servoing position error of 0.64 mm, fully meeting the design requirements for robotic grinding. Grinding tests on large-part weld beads compared the proposed vision-based grinding model with constant-force grinding: under constant force the residual bead height largely copies the original profile, so the finished height varies too much to meet the machining requirements, whereas with the proposed grinding model and control method the residual height stays within ±0.15 mm, fully satisfying the requirements of automated grinding of large-part weld beads.

The theoretical and experimental work shows that grinding large-part weld beads with a small autonomous mobile robot is a sound approach that markedly improves grinding accuracy and machining quality. The vision system built here is stable and reliable, and the proposed stripe-center detection, elliptical feature-point sub-pixel localization, and bead feature-point extraction algorithms are robust, accurate, and fast enough for real-time measurement and localization of large-part weld beads, providing a new technical route for their automated grinding and polishing.
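The sub-pixel stripe-center extraction described above can be sketched minimally. Assuming (as the abstract states) a roughly Gaussian intensity profile across the stripe, a three-point parabola fit to the log-intensity around each column's brightest pixel recovers the center to sub-pixel accuracy; the function and all values below are illustrative, not the thesis's actual algorithm:

```python
import numpy as np

def column_subpixel_center(img):
    """For each image column, locate the stripe center to sub-pixel
    accuracy by fitting a parabola to the log-intensity around the
    brightest pixel (exact for an ideal Gaussian cross-section)."""
    centers = []
    for col in img.T:
        r = int(np.argmax(col))
        if r == 0 or r == len(col) - 1:          # peak on the border: no fit possible
            centers.append(float(r))
            continue
        y0 = np.log(col[r - 1] + 1e-12)
        y1 = np.log(col[r] + 1e-12)
        y2 = np.log(col[r + 1] + 1e-12)
        denom = y0 - 2.0 * y1 + y2
        delta = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
        centers.append(r + delta)                # sub-pixel row of the stripe center
    return np.array(centers)

# Synthetic stripe: Gaussian cross-section centered at row 20.3, 5 columns wide
rows = np.arange(64, dtype=float)
img = np.exp(-0.5 * ((rows[:, None] - 20.3) / 2.0) ** 2) * np.ones((1, 5))
print(column_subpixel_center(img))  # ≈ 20.3 in every column
```

Real stripe images would first need the denoising, ROI restriction, and branch pruning the abstract describes; this sketch only shows why a Gaussian profile admits closed-form sub-pixel refinement.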

【Abstract】 With the rapid development of industry and manufacturing, the demand for large-scale structural parts used in high-speed train bodies, ship hulls, and aircraft fuselages is increasing. As an important process in the manufacturing field, welding has been widely employed in forming and joining large flat and curved structures. Excess weld metal must be removed after welding: grinding and polishing it not only meets the requirement for a smooth joint surface but also reduces welding stress, so the removal process is critical. Currently, the process is still performed manually. Skilled workers grind and polish large-scale structures with abrasive flap wheels; the work is labor-intensive yet inefficient, and machining accuracy cannot be guaranteed. Damage to the base metal occurs frequently during grinding and polishing and, more importantly, prolonged exposure to the dust produced can harm workers' health. In some cases workers must operate in narrow spaces or at height, where conditions are harsh. Automated grinding and polishing of large-scale structures is therefore urgently needed. Automation with machine tools requires the structure to be smaller than the machine tool, and machines of the required size are difficult to build and assemble, inflexible, and expensive. Based on these difficulties, this paper proposes a new approach: grinding and polishing large-scale structures with a small autonomous mobile robot. Only when the real-time three-dimensional geometry and position of the weld bead are obtained can the grinding and polishing parameters be planned and the machining allowance inspected.
Focusing on the critical technical problems discussed above, the paper presents research covering the vision system of the grinding and polishing robot, grinding-process modeling, mathematical modeling of the vision system, weld-bead image processing, weld-bead feature extraction, sub-pixel feature-point coordinate extraction, and grinding-allowance detection. According to the structural characteristics and working conditions of large-scale-structure grinding and polishing, and taking the requirements of the robotic vision system into consideration, the vision system is designed by combining binocular stereo vision, a laser structured-light source, and P4P robot feature points. The change in the stripe shape projected by the laser structured light indicates the change in the 3D geometry of the weld bead; sub-pixel coordinates are obtained by image processing, and the feature-point coordinates of the weld bead are extracted. With binocular stereo vision, disparity is computed between corresponding feature points of the same image pair, yielding the three-dimensional geometry of the bead; this avoids massive image-matching computation and saves considerable resources and time. Since four LEDs on the same plane serve as feature points, the coordinates of the weld feature points and the orientation of the weld bead can be obtained correctly. After a detailed analysis of the grinding and polishing plan based on the weld-bead information (geometric and position information) from the vision system, a grinding model of the weld bead is built.
According to the quantitative relationship between grinding force and material removed from the weld bead, a grinding and polishing control strategy is proposed, and the workflow of the grinding and polishing robotic vision system is determined. On this basis, the hardware is selected and the relevant calculations performed; the resolution of the vision system in the Z direction is calculated and verified. A mathematical model of the polishing and grinding robotic vision system is developed and solved. Based on the traits of different calibration methods, a calibration method is chosen and the intrinsic camera parameters are calibrated. The sources and characteristics of noise in the weld-bead structured-light images are analyzed in detail; the noise is removed by adaptive filtering, the rectified image is obtained by gray-level interpolation, and edge sharpening, contrast stretching, and binarization are applied to the weld image. The stripe edge pixels are then searched, and the pixel coordinates of the structured-light stripe center are found. To improve the accuracy and speed of weld-bead processing, a dynamic ROI (region of interest) localization algorithm is proposed for nearly straight weld beads; it shrinks the processed image area to 2% of the original, reducing computation while improving speed. A column difference-of-Gaussian algorithm is proposed for the ROI: a new way of extracting the sub-pixel coordinates of the structured-light center without preprocessing. For weld beads of general shape, a dynamic ROI containing the weld-bead feature points can be obtained quickly and accurately. Exploiting the fact that the luminance of the structured light follows an approximately Gaussian distribution across its section, the paper puts forward Gaussian and parabolic models of the luminance distribution.
Sub-pixel coordinates are extracted by a partial-differential detection algorithm (based on the Hessian matrix) together with a length-threshold removal algorithm, and the excess branch lines are removed. Based on the extracted sub-pixel coordinates of the weld-bead stripe, a slope-and-distance-threshold analysis algorithm is proposed, which correctly extracts the turning points and the highest point of the weld bead. The cross-sectional area at the laser projection line is calculated by numerical integration. The mathematical relation between robot and camera coordinates proposed in the paper is computed from four coplanar LEDs on the robot body. Since the LEDs appear as approximate ellipses in the image, an extraction algorithm for the sub-pixel ellipse center is proposed that applies adaptive threshold segmentation and gray-weighted interpolation; image-processing tests verify its extraction precision by comparison. The collected data include all parameters the vision system requires, and error compensation is presented based on an analysis of the measurement and positioning errors of the vision system. Building on the theoretical analysis, the paper tests the measurement and positioning methods and compares the image-processing precision of the search-and-fit algorithm, the column difference-of-Gaussian algorithm, and the Steger algorithm based on the Hessian matrix. Under the different algorithms, the spatial geometry of the same weld-bead section is measured; the results indicate that the Steger center-extraction algorithm combined with slope-distance-threshold analysis achieves a measurement precision within 0.09 mm. The weld-bead allowance-detection algorithm is verified and shown to be precise and efficient. Taking the Steger algorithm as an example, the repeatability of the vision system is tested.
The tests give a repeatability within 0.04 mm. Grinding and polishing tests on the robot demonstrate the reliability and validity of the robotic vision system, as does grinding and polishing planning based on the weld-bead information and the established grinding model. The research shows that grinding and polishing the weld beads of large structures with a small mobile robot is reasonable and improves the precision and processing quality of grinding and polishing. The grinding and polishing robot vision system is stable and reliable. The image-processing methods, including the structured-light center detection algorithm, the feature-point center sub-pixel extraction algorithm, and the weld-bead feature-point extraction algorithm, are robust, precise, and fast. The system meets the requirements for real-time detection and positioning of the weld bead. The work discussed in the paper provides a new technical approach to robotic grinding and polishing of large-scale structural parts.
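The binocular measurement step described in the abstract, disparity between corresponding feature points in a rectified image pair, reduces to standard stereo triangulation. As a minimal sketch with made-up calibration values (focal length, baseline, and principal point here are illustrative, not the thesis's hardware parameters):

```python
import numpy as np

def triangulate_rectified(uv_left, uv_right, f, baseline, cx, cy):
    """Recover a 3-D point (left-camera frame) from one matched feature
    point in a rectified stereo pair. f, cx, cy are in pixels; the
    baseline is in mm, so X, Y, Z come out in mm."""
    uL, vL = uv_left
    uR, _ = uv_right
    d = uL - uR                 # disparity in pixels (left minus right)
    Z = f * baseline / d        # depth from similar triangles
    X = (uL - cx) * Z / f       # back-project the pixel through the pinhole model
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])

# Example: f = 2000 px, baseline = 120 mm, principal point (640, 480)
p = triangulate_rectified((700, 500), (652, 500),
                          f=2000.0, baseline=120.0, cx=640.0, cy=480.0)
print(p)  # [150.0, 50.0, 5000.0]: Z = 2000*120/48 = 5000 mm
```

Because the corresponding points are the structured-light feature points themselves, only a handful of such triangulations are needed per frame, which is the computation-saving the abstract attributes to avoiding dense image matching.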

  • 【Online Publication Contributor】 Jilin University
  • 【Online Publication Year/Issue】 2014, Issue 09