
Research on the Theory and Applications of Robust Data Rectification

Dissertation for Doctoral Degree of Engineering in Shanghai Jiao Tong University

【Author】 高倩 (Gao Qian)

【Supervisor】 邵惠鹤 (Shao Huihe)

【Author information】 Shanghai Jiao Tong University, Control Theory and Control Engineering, 2007, Doctorate

【摘要】 The accuracy and reliability of industrial process data are the foundation of operation analysis and improvement, process control and optimization, and plant management. However, owing to instrument malfunction, measurement bias, equipment leakage and other causes, industrial measurements inevitably contain errors of various kinds. The task of data rectification is to use the redundant information of the process to treat the errors in the measured data so that the data satisfy the material balances, energy balances and other relations inherent in the process. Current research on data rectification covers three main directions. The first separates gross error detection from data reconciliation: gross errors are first identified and eliminated by statistical hypothesis tests, and reconciliation is then performed with a least-squares objective. The second carries out data reconciliation and gross error detection simultaneously, seeking estimators that both resist gross errors (contamination) and retain good estimation efficiency; theoretically, it extends the traditional framework, which assumes the absence of gross errors, to one that accounts for their presence, i.e. it builds the theory directly on a contaminated error model. The third seeks distribution models that match reality and accurately reflect the actual distribution of the measurements, and uses such a distribution as the reconciliation objective. The second and third directions in essence introduce robust estimation theory into data rectification, seeking parameter estimates that remain reliable when the errors depart from the ideal distribution; this dissertation refers to such methods collectively as robust data rectification. Robust estimation has been widely applied in mathematical statistics, surveying and mapping, and signal processing, yet it has seen little study or application in data rectification. This dissertation attempts some work in this direction. Addressing the key techniques of robust data rectification, it gives a comprehensive and in-depth treatment covering sensitivity and redundancy analysis of variables, robust rectification of low-redundancy data, and the application of robust rectification to linear, bilinear, nonlinear and dynamic reconciliation problems. It proposes an improved robust rectification method for low-redundancy systems, a robust nonlinear rectification method based on an influence function matrix, a robust adaptive rectification method, and a robust nonlinear dynamic rectification method based on continuous error testing, and it discusses the application of robust nonlinear rectification to the actual production process of the carbon-one (C1) section of the Shanghai Coking Plant. The main contributions are as follows:

1. The sensitivity of the rectification accuracy and of the gross-error statistics to the measured values and to variable redundancy is analyzed qualitatively and quantitatively for robust rectification based on the contaminated distribution model. It is shown that, with robust rectification, the introduction of weight factors reduces the sensitivity of the estimates to measurement errors, increases the local redundancy of the variables, improves rectification accuracy, and enhances gross error detectability.

2. Conventional robust least-squares rectification is hard to apply to low-redundancy systems, and its iterations may misjudge gross errors and yield negative estimates. An improved robust least-squares algorithm for low-redundancy systems is therefore proposed: it incorporates variable redundancy into the gross-error test statistic, modifies the Huber weight function by adding an elimination zone, and at each iteration eliminates the suspicious measurement with the largest error statistic, removing the influence of gross errors while reducing the chance of misjudgment. To avoid negative reconciled values, the algorithm also handles upper and lower bounds on the variables, laying a foundation for accurate parameter estimation.

3. For processes whose data are contaminated by gross errors while the linearized equations are ill-conditioned, a robust regularized rectification method is proposed that takes a generalized maximum-likelihood regularization function as the reconciliation objective. This method is in essence equivalent to introducing an influence function matrix directly into the reconciliation objective, which provides a basis for robust rectification of complex nonlinear processes: the influence function matrix is introduced directly into the nonlinear objective and the problem is solved by nonlinear programming, avoiding both the error of Taylor linearization of the nonlinear constraints and the ill-posedness of solving the linearized equations, while retaining good robustness.

4. To remedy the shortcomings of rectification based on the contaminated normal distribution, a robust adaptive error distribution model is proposed. It has a density of the same form as the standard normal distribution, but adjusts the error variance with robust adaptive variable weight factors: inflating the variance of gross errors reduces their influence on the estimates. Applying the model to reconciliation with bilinear constraints, an analytical robust adaptive least-squares solution is derived; the method is further extended to correlated measurements, improving its practicality.

5. Conventional nonlinear dynamic rectification assumes normally distributed measurement noise and may lag in its estimates. A robust nonlinear dynamic rectification method based on continuous error testing is proposed: the robust estimation function enters the objective through a trust (confidence) function matrix that assigns small weights to outlying gross errors, reducing their influence on the estimates. To exploit as much valid process information as possible, a two-step construction of the trust matrix is proposed. Since a set-point change can also produce a large jump in the Mahalanobis distance and hence misjudgment, a continuous test for outlying errors is proposed to distinguish set-point changes from outliers; when a set-point change is detected, a new moving historical time window is acquired for reconciliation, avoiding estimation lag and reducing misjudgment of set-point changes.

6. For the bilinear reconciliation problems with splitter nodes common in chemical engineering, a stable and general variable classification algorithm is proposed. It linearizes the system of constraint equations, gives an analytical expression for the Jacobian matrix, and combines degree-of-freedom analysis with projection matrices and QR decomposition to analyze the redundancy and observability of the data. The algorithm suits not only bilinear classification with intensive constraints but also trilinear classification expressed with split fractions, as well as process networks with reaction nodes and other multilinear classification problems.

7. For the data rectification module of the MES system in the carbon-one (C1) workshop of a Shanghai coking company, the robust nonlinear rectification method is applied to this multi-component bilinear process; a practical rectification algorithm and the rectification results are given.

Finally, the dissertation summarizes the work and discusses prospects for further research and development of robust data rectification techniques.
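The robust least-squares reconciliation idea running through the contributions above can be illustrated with a minimal sketch: iteratively reweighted least squares with Huber weights under a single linear mass balance. Everything here is an assumption for illustration (the function names, the tuning constant k = 1.345, the three-stream example network); it is not the dissertation's algorithm, which additionally handles elimination zones, variable bounds and low-redundancy systems.

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Huber weight psi(r)/r: 1 inside [-k, k], k/|r| outside."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def robust_reconcile(y, A, sigma, k=1.345, iters=30):
    """Iteratively reweighted WLS reconciliation subject to A @ x = 0.

    Measurements with large standardized adjustments are down-weighted
    (their variance is inflated by 1/w), so suspect values absorb more
    of the adjustment and influence the remaining estimates less.
    """
    x = y.astype(float).copy()
    for _ in range(iters):
        w = huber_weight((y - x) / sigma, k)
        V = np.diag(sigma**2 / w)  # robustly inflated variances
        # WLS projection of y onto the balance space {x : A x = 0}
        x_new = y - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
        if np.allclose(x_new, x, atol=1e-12):
            return x_new
        x = x_new
    return x

# Hypothetical splitter balance x1 = x2 + x3 with a 0.9 imbalance
y = np.array([10.1, 4.0, 5.2])
A = np.array([[1.0, -1.0, -1.0]])
x = robust_reconcile(y, A, sigma=np.array([0.1, 0.1, 0.1]))
# the reconciled values satisfy the balance exactly: x1 - x2 - x3 = 0
```

With one constraint and equal variances the adjustment is shared evenly; the robust weights only start to discriminate once several constraints let a gross error stand out against its redundant peers.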

【Abstract】 Reliable process data is the foundation of process monitoring, control performance evaluation, process control, optimization and statistical quality control. However, due to sources such as measurement irreproducibility, instrument degradation and malfunction, human error, process-related errors and other unmeasured effects, measurements can be contaminated with errors (random and/or gross). Rational use of the large volume of data generated by chemical plants requires suitable techniques to improve their accuracy. Data reconciliation (DR) is a procedure of optimally adjusting measured data so that the adjusted values obey the conservation laws and other constraints. So far, research on data rectification includes conventional DR, simultaneous data reconciliation and gross error detection, and DR based on estimation of the error probability density function (PDF). Conventional DR rests on the assumption of normally distributed errors: it first detects and eliminates gross errors and then proceeds with DR. The simultaneous strategy assumes that measurement errors follow normal-like distributions with heavy tails, or combines two distributions to account for the contamination caused by outliers (gross errors), and takes generalized maximum-likelihood functions as the reconciliation objectives. DR based on error PDF estimation estimates the real error distribution and then performs DR with that PDF. In this dissertation, a further study of robust data reconciliation is carried out, with particular attention to sensitivity analyses and redundancy calculation, and to improved robust least-squares algorithms for linear, nonlinear and dynamic systems. The dissertation studies robust data reconciliation systematically and makes progress in the following aspects:

1. The quantitative and qualitative contributions of different measured variables to the estimation precision, the variable local redundancy and the gross error detection in robust data rectification based on the contaminated normal distribution were investigated. On this basis, adding weight factors to suspicious measurements reduces the sensitivity of the estimates to gross errors, enhances variable local redundancy and gross error detectability, and significantly improves the quality of process validation.

2. The conventional robust least-squares iterative rectification algorithm suffers from an inability to detect gross errors in weakly redundant variables and from false detections. An improved iterative algorithm is proposed that takes local redundancy and inequality constraints into account: at each iteration, the suspicious measurement with the largest test statistic (or the one with the second largest, when the estimate violates its bounds) is eliminated. The method yields a significant improvement in estimation and gross error detection.

3. Solving the generalized maximum-likelihood DR problem with a regularized objective function effectively alleviates the singularity of the iterative solution of nonlinear data reconciliation. The same effect is obtained by adding an influence function matrix to the regularized objective function, which leads to a more practical iterative robust nonlinear DR algorithm that uses SQP to solve the nonlinear optimization at each iteration. Simulation results validate the efficiency of the proposed approaches.

4. The method based on the contaminated Gaussian distribution is robust for data rectification because it accounts for the probability distributions of random errors and gross errors simultaneously. However, its application is limited because the estimation precision depends on the selection of a priori model parameters, which are difficult to obtain in practice. To avoid supplying these parameters, a robust adaptive data rectification approach is proposed. First, a robust adaptive probability distribution model of the errors is constructed; then the Lagrange method is used to obtain an iterative algebraic solution. Application to a process with bilinear constraints validates the efficiency of the proposed method.

5. A novel robust method is proposed that introduces a trust function matrix into the conventional least-squares objective of nonlinear dynamic data reconciliation (NDDR). To avoid loss of data information, an element-wise Mahalanobis distance is proposed, as an improvement on the vector-wise distance, to construct the penalty function matrix; this is called the two-step method. However, these methods, like NDDR itself, show a significant delay in the input estimates when a step change occurs, which also delays the estimates of the output variables. A continuous error test is therefore included in the gross error detection logic to detect set-point changes; when a set-point change is sensed, a new historical data window is acquired to track the new set-point. Correlation of the measurement errors is also considered.

6. A general and robust classification algorithm is given to analyze the redundancy and observability of process variables for large-scale bilinear balances with splitters, as found in industrial processes where data reconciliation is applied. In the algorithm, the degrees of freedom are first used to examine redundancy and observability; then a coefficient matrix of the constraints, linearized by Taylor expansion at the initial estimates, is used to further examine observability for the entire process based on Crowe's method and QR factorization. This classification method is suitable not only for bilinear processes with intensive constraints but also for trilinear constraints with multiple split fractions.

7. For the quasi-static, multi-component system with bilinear constraints in the Methanol Joint-Production Plant of the Shanghai Coking Plant, a practical robust data rectification algorithm flowsheet is introduced and the rectification results are analyzed.

The dissertation concludes with a summary and an outlook on future robust data reconciliation research.
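The classification step of contribution 6 can be sketched in a heavily simplified form. Assume purely linear balances A_x x + A_u u = 0 (x measured, u unmeasured) and an A_u of full column rank, so every unmeasured variable is observable; the sketch then only classifies measured variables as redundant or nonredundant via the Crowe-style projection obtained from a QR decomposition. The matrix names and the two-node example are assumptions; the dissertation's algorithm further handles bilinear terms, splitters, reaction nodes and rank-deficient cases.

```python
import numpy as np

def classify_measured(Au, Ax, tol=1e-10):
    """Crowe-style projection via QR (simplified: Au has full column rank).

    Returns (P, redundant): P @ Au = 0, so P @ Ax @ x = 0 are the
    projected balances involving measured variables only; x_i is
    redundant iff its column of P @ Ax is nonzero.
    """
    m, nu = Au.shape
    Q, _ = np.linalg.qr(Au, mode="complete")  # first nu columns span range(Au)
    P = Q[:, nu:].T                           # rows orthogonal to range(Au)
    redundant = np.abs(P @ Ax).max(axis=0) > tol
    return P, redundant

# Two nodes: x1 - x2 - u1 = 0 and u1 - x3 = 0, with u1 unmeasured
Au = np.array([[-1.0], [1.0]])
Ax = np.array([[1.0, -1.0, 0.0],
               [0.0, 0.0, -1.0]])
P, redundant = classify_measured(Au, Ax)
# the single projected balance is x1 - x2 - x3 = 0, so all three
# measured flows are redundant (and u1 is observable as x1 - x2)
```

Eliminating the unmeasured variable before testing the measured ones is exactly why the projection matters: the raw balances each involve u1, so no residual test on them alone could flag a measured flow.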
