
Research on Security Issues and Forensic Reasoning of Computer Forensics

【Author】 Chen Long

【Advisor】 Wang Guoyin

【Author Information】 Southwest Jiaotong University, Computer Application Technology, 2009, PhD

【Abstract】 Computer forensics is an important means of resolving disputes and fighting computer crime, and a key component of information assurance; it plays a significant role in maintaining social stability and upholding legal order.

The security and reliability of computer forensics face particular challenges. First, digital evidence is inherently fragile: it is easily modified, the modifications are hard to detect, and both during and after collection the evidence is exposed to threats such as destruction, media errors, and the forging of specific data. Second, the massive data volumes involved in many cases create a conflict between the need for fine-grained integrity verification when preserving evidence and the amount of hash data this requires. Finally, anti-forensic techniques make the security of evidence acquisition tools a new problem, and the reliability of forensic analysis conclusions is increasingly called into question.

Starting from a survey of the state of the art and the open problems in computer forensics, this dissertation aims to strengthen its security and reliability. It develops a theory of fine-grained data integrity verification to support fine-grained evidence preservation, and thereby the authenticity and integrity of digital evidence, and it studies secure methods of evidence acquisition and identification as well as reliable formal forensic reasoning. The main contributions are as follows.

First, to meet the demand for fine-grained integrity checking while containing the hash volume caused by massive data, a fine-grained data integrity checking method called integrity indication coding (IIC) is proposed on the basis of combinatorial coding. IIC uses a check matrix to express which hashes supervise which data objects; through suitable cross checking it performs fine-grained verification with considerably fewer hashes while leaving the security of hash verification unchanged, which makes it well suited to fine-grained evidence preservation. Several traditional integrity checking schemes turn out to be special cases of IIC without cross checking. A code-gain metric is designed as the basis for choosing among codes and setting their parameters. Fine-grained checking isolates a small number of errors accurately and efficiently, mitigating the catastrophic effect of an occasional error or a minor tampering invalidating an entire body of data.
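
The cross-checking idea can be illustrated with its simplest arrangement, a two-dimensional grid (in effect an order-2 hypercube): hashing every row and every column of an r x c grid of blocks takes r + c hashes instead of one per block, yet still pinpoints a single corrupted block at the intersection of the failing row and column. The following Python sketch is illustrative only; the function names and parameters are ours, not the dissertation's.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def grid_hashes(blocks, rows, cols):
    """Hash every row and every column of an rows x cols grid of blocks:
    rows + cols hashes instead of rows * cols, with cross checking."""
    assert len(blocks) == rows * cols
    row_h = [sha256(b"".join(blocks[r * cols:(r + 1) * cols])) for r in range(rows)]
    col_h = [sha256(b"".join(blocks[r * cols + c] for r in range(rows))) for c in range(cols)]
    return row_h, col_h

def locate_single_error(blocks, rows, cols, row_h, col_h):
    """Recompute the cross hashes and return the index of the single
    corrupted block, or None if everything verifies."""
    new_row, new_col = grid_hashes(blocks, rows, cols)
    bad_rows = [r for r in range(rows) if new_row[r] != row_h[r]]
    bad_cols = [c for c in range(cols) if new_col[c] != col_h[c]]
    if not bad_rows and not bad_cols:
        return None
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0] * cols + bad_cols[0]
    raise ValueError("more than one error: outside this sketch's guarantee")

# usage: a 4 KiB image split into a 4x4 grid of 256-byte blocks;
# 8 hashes suffice where per-block hashing would need 16
blocks = [bytes([i]) * 256 for i in range(16)]
row_h, col_h = grid_hashes(blocks, 4, 4)
blocks[9] = b"\x00" * 256                     # simulate a corrupted sector
assert locate_single_error(blocks, 4, 4, row_h, col_h) == 9
```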

Second, three codes are constructed from this method: a combinatorial single-error integrity indication code (CleIIC), a hypercube single-error integrity indication code (HleIIC), and a Galois-field multi-error integrity indication code (GFIIC). Concurrent computation and re-hashing are used to accelerate hash generation and so improve checking efficiency. CleIIC achieves a large compression of the hash data under the single-error assumption. HleIIC combines a high compression ratio with a low error amplification ratio under a single error, and since any positive integer may serve as the order of the hypercube, it handles data objects of widely different sizes efficiently. GFIIC indicates multiple errors exactly, offers a high compression ratio and low error amplification at low error rates, and can be adapted to different practical needs through flexible code parameters. GFIIC has a modular hash structure: over a d-dimensional vector space over GF(q), each additional error can be indicated by adding (d-1) further groups of q hashes each, that is, (d-1)q hashes. The hashes of HleIIC and GFIIC fall into parallel groups, any one of which can independently indicate the integrity of all the data; this allows the hash data to be stored separately by multiple parties and makes fine-grained integrity checking more practical in applications such as evidence preservation.
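
The parallel-group property can be pictured with a toy stand-in for the GF(q) construction, taking q prime and dimension d = 2 (the real GFIIC is more general): the q * q blocks, viewed as points of the plane over Z_q, are partitioned into the q parallel lines of each slope, and every slope class covers each block exactly once, so one group of q hashes already witnesses all the data. This sketch and its parameters are illustrative assumptions, not the dissertation's construction.

```python
import hashlib

def line_partition(q: int, slope: int):
    """Partition block indices 0..q*q-1, viewed as points (x, y) over
    Z_q (q prime), into the q parallel lines y = slope*x + b (mod q)."""
    return [[x * q + (slope * x + b) % q for x in range(q)] for b in range(q)]

def group_hashes(blocks, q: int, slope: int):
    """One hash per line of the chosen slope: q hashes that, on their
    own, witness the integrity of all q*q blocks."""
    return [hashlib.sha256(b"".join(blocks[i] for i in line)).hexdigest()
            for line in line_partition(q, slope)]

# usage: 25 blocks; custodian A keeps the slope-0 group, custodian B the
# slope-1 group. Either group alone detects any tampering, and the
# mismatching lines of several groups intersect on the tampered blocks.
q = 5
blocks = [bytes([i]) * 64 for i in range(q * q)]
group_a = group_hashes(blocks, q, slope=0)
group_b = group_hashes(blocks, q, slope=1)
blocks[7] = b"\xff" * 64                      # tamper with one block
assert group_hashes(blocks, q, 0) != group_a  # A alone notices
assert group_hashes(blocks, q, 1) != group_b  # so does B
```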

Third, facing the threat of anti-forensics, the vulnerability of a typical evidence identification method based on low-level data characteristics, context-triggered piecewise hashing (CTPH), is analyzed, and a fast keyed CTPH algorithm named secure and quick hash checksum (Sksum) is proposed. Variable parameters are introduced into the CTPH algorithm and its underlying conventional hash, so that different keys produce different file signatures; this makes it much harder for an attacker to recover the key or the parameter combination, whether by guessing keys or by comparing signatures, and then to attack a file signature. Although it emits one extra hash signature, the improved algorithm runs as fast as or faster than the original and finds similar files to a greater extent. Performance analysis and experiments show that the parameter groups generated by different keys are largely independent and that the space of parameter combinations is huge, so the algorithm withstands targeted attacks such as forging, file splitting and merging, and modification at specific positions, and its security is clearly improved.
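
The keyed-CTPH idea can be sketched as follows: derive the chunking trigger and the piecewise hashes from a secret key, so each key yields a different file signature and an attacker without the key cannot predict where the chunks are cut. This is a schematic only, assuming a naive additive rolling hash and HMAC-SHA256 pieces; it does not reproduce the dissertation's Sksum algorithm or its parameterization.

```python
import hmac

WINDOW = 7          # rolling-hash window (illustrative)
BLOCK = 64          # expected chunk size (illustrative)

def keyed_ctph(data: bytes, key: bytes) -> str:
    """Context-triggered piecewise hash with key-dependent cut points.

    A rolling hash slides over the data; whenever it hits a trigger
    value derived from the key, the current chunk is closed and hashed
    with HMAC. Different keys therefore give different signatures for
    the same file, while similar files under the same key still share
    most chunk hashes."""
    trigger = hmac.new(key, b"trigger", "sha256").digest()[0] % BLOCK
    pieces, start, roll = [], 0, 0
    for i, byte in enumerate(data):
        roll += byte
        if i >= WINDOW:
            roll -= data[i - WINDOW]          # naive additive rolling hash
        if roll % BLOCK == trigger:           # key-dependent cut point
            pieces.append(hmac.new(key, data[start:i + 1], "sha256").hexdigest()[:6])
            start = i + 1
    pieces.append(hmac.new(key, data[start:], "sha256").hexdigest()[:6])
    return ":".join(pieces)

# usage: the same file signed under two keys yields unrelated signatures
data = bytes(range(256)) * 16
assert keyed_ctph(data, b"key-1") != keyed_ctph(data, b"key-2")
```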

Finally, to overcome the limitations of existing finite state machine models such as Gladyshev's, a general timed Mealy finite state machine model is proposed together with forward, bidirectional and other reasoning strategies. The model simultaneously expresses evidence about system inputs, outputs and internal states together with their time attributes, which facilitates the formal representation of digital evidence and the modeling of cases. Case studies and experimental results demonstrate the effectiveness of the general model and its reasoning strategies.
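
A timed Mealy machine can be pictured as a transition table whose entries carry an output symbol and a time bound, with forward reasoning enumerating the runs consistent with the timed evidence. The toy encoding below uses an invented transition table and a simple forward pass; the dissertation's model and its forward/bidirectional strategies are considerably more general.

```python
from itertools import product

# A timed Mealy machine as a transition table:
#   (state, input) -> (next_state, output, max_delay)
# States, inputs and timings here are invented for illustration.
TRANS = {
    ("idle",   "login"):  ("active",  "log_ok",   5),
    ("idle",   "probe"):  ("idle",    "log_fail", 5),
    ("active", "delete"): ("cleaned", "log_del",  10),
    ("active", "logout"): ("idle",    "log_out",  5),
}

def forward_runs(start, candidate_inputs, observed):
    """Forward reasoning: keep the candidate input sequences whose runs
    from `start` reproduce the timed evidence `observed`, a list of
    (output, timestamp) pairs extracted from, e.g., log files."""
    runs = []
    for seq in candidate_inputs:
        state, t, ok = start, 0, True
        for sym, (obs_out, obs_t) in zip(seq, observed):
            nxt = TRANS.get((state, sym))
            if nxt is None or nxt[1] != obs_out or obs_t > t + nxt[2]:
                ok = False     # transition, output or timing contradicts evidence
                break
            state, t = nxt[0], obs_t
        if ok:
            runs.append((seq, state))
    return runs

# evidence: a successful login logged at t=3, then a deletion log at t=9
evidence = [("log_ok", 3), ("log_del", 9)]
candidates = product(["login", "probe", "delete", "logout"], repeat=2)
print(forward_runs("idle", candidates, evidence))
# -> [(('login', 'delete'), 'cleaned')] : a single run explains the logs
```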
