节点文献

基于低密度生成矩阵编码的迭代量化算法研究

Iterative Quantization Based on Low-density Generator Matrix Codes

【Author】 Wang Qingchuan (汪晴川)

【Supervisor】 He Chen (何晨)

【Author Information】 Shanghai Jiao Tong University, Communication and Information Systems, 2014, PhD

【摘要】 均方误差量化等量化问题既是有失真信源编码问题之关键,也在信道编码问题中起到了成形发射信号、使之分布接近最优的作用,对于信道容量的趋近至关重要。作为信道编码中常用的低密度校验(low-density parity-check, LDPC)码的对偶码,低密度生成矩阵(low-density generator matrix, LDGM)码是适用于此类量化问题的一种重要编码手段,但相应的基于信度传播的迭代量化算法需要引入硬判决步骤方可收敛,此时传统的基于密度演化的分析方法不再直接适用,算法的分析和LDGM度数分布的优化问题在文献中一直未能得到解决,而它们对于趋于理想量化性能的获得又是必需的。本文针对均方误差量化问题,以及更一般的有限交换群上的对称信源编码问题,完成了LDGM迭代量化算法的分析和度数分布等参数的优化。鉴于迭代量化所必需的硬判决步骤难以纳入密度演化方法的框架中进行分析,本文将各个硬判决步骤分开,将量化性能的分析转化为硬判决所用信度传播外信息准确性的分析,并通过引入扰码等技巧,证明了信度传播算法中的消息和外信息具有对称密度,且满足一定劣化关系,据此为外信息的误差定出了上界,并给出了该误差界随码长和迭代次数的增加而趋于零的充分和必要条件,这些条件的成立与否均可根据LDGM度数分布的密度演化结果来判断。以上述条件为度数分布的优化准则,文中先在擦除近似条件下使用线性规划方法进行初步优化,再根据密度演化结果对擦除近似的误差加以迭代校正,从而较准确地获得了迭代次数趋于无穷时的最优度数分布。这些工作首先在二进制情况下完成,然后也推广到了二进制LDGM码通过调制映射产生2^K进制码本的情况。考虑到实际量化时的迭代次数是有限的,信度传播外信息仍会存在一定误差,本文又从较简单的二进制擦除量化情况出发,分析了这一误差所导致的错误硬判决对此后迭代的影响,并在该分析结果的指引下,提出了一种除二进制擦除量化外也适用于二进制及2^K进制均方误差量化的纠正算法,通过对信度传播算法所用先验信息的调整减小了此前错误硬判决的影响,使得量化误差明显降低。同时,根据上述分析结果,文中也给出了量化误差与迭代次数、每次迭代后的硬判决比特数、码率等参数间关系的一个经验公式,据此在给定的迭代次数下,对这些参数和度数分布作出了进一步的优化。仿真结果表明,本文提出的迭代量化算法在均方误差量化问题中确能取得接近理想的量化误差性能,量化误差与理论界的差距最低仅0.012 dB,这是包括网格编码量化和极化码在内的现有方法在同等码长和运算复杂度下所无法达到的。最后,我们将LDGM码本推广为嵌套LDGM-LDPC码本,并以二进制情况为例研究了它在高斯脏纸编码中的应用,解决了此时的量化算法设计和度数分布优化等问题,并通过仿真验证了该方法确实可依靠良好的成形效果而取得较接近信道容量的传输速率。

【Abstract】 Quantization, in particular mean-square error (MSE) quantization, is not only essential in lossy source coding problems, but can also assume in channel coding problems the key role of shaping the transmitted signal to have a near-optimal distribution, which is necessary for approaching capacity. Being dual to the low-density parity-check (LDPC) codes commonly used in channel coding, low-density generator matrix (LDGM) codes are well-suited to such quantization problems, but their iterative quantization algorithm based on belief propagation (BP) can only converge after the introduction of extra decimation steps, preventing direct application of conventional density evolution (DE) methods for analysis. Proper analysis of this iterative quantization algorithm and optimization of the LDGM degree distribution are essential to achieving near-ideal quantization performance, yet they have remained unsolved so far.

In this thesis, the use of LDGM-based iterative quantization in MSE quantization, and more generally in symmetric source coding problems over a finite abelian group, is analyzed, and parameters such as the degree distribution are optimized accordingly. As DE methods cannot accommodate the necessary decimation steps, the proposed analysis method instead considers each decimation step separately, and shows the dependence of quantization performance on the accuracy of the BP-derived extrinsic information used by the decimation steps. Through the use of scrambling codes and other techniques, it is proved that the BP messages and the extrinsic information have symmetric densities and satisfy degradation relationships; in this way, the error in the extrinsic information can be bounded, and sufficient and necessary conditions for the error to vanish as the block length and the iteration count go to infinity have been derived, all in terms of DE results of the degree distribution.
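The task BP with decimation approximates is nearest-codeword search over an LDGM codebook. A minimal brute-force sketch (illustration only, not the thesis's algorithm; the matrix construction and function name here are hypothetical) quantizes a real-valued sequence onto the BPSK-mapped codewords of a tiny LDGM code:

```python
import itertools
import random

def ldgm_quantize_bruteforce(G, y):
    """Find the codeword of the binary code {u G mod 2}, BPSK-mapped to
    +/-1, closest to y in per-sample squared error.  Exhaustive search is
    only feasible for tiny k; the thesis replaces it with BP + decimation."""
    k, n = len(G), len(G[0])
    best_s, best_mse = None, float("inf")
    for u in itertools.product((0, 1), repeat=k):
        c = [0] * n
        for ui, row in zip(u, G):
            if ui:
                c = [a ^ b for a, b in zip(c, row)]  # add generator row mod 2
        s = [1.0 - 2.0 * b for b in c]               # BPSK: 0 -> +1, 1 -> -1
        mse = sum((yi - si) ** 2 for yi, si in zip(y, s)) / n
        if mse < best_mse:
            best_s, best_mse = s, mse
    return best_s, best_mse

# Toy low-density generator matrix: k = 6 rows of weight 3, n = 12.
random.seed(0)
k, n = 6, 12
G = []
for _ in range(k):
    row = [0] * n
    for j in random.sample(range(n), 3):             # 3 ones per row
        row[j] = 1
    G.append(row)

y = [random.gauss(0.0, 1.0) for _ in range(n)]       # source sequence
s, mse = ldgm_quantize_bruteforce(G, y)
```

Since the all-zero information word is always in the search, the returned MSE never exceeds that of the all-(+1) codeword; the point of the thesis is achieving near-optimal MSE at large block lengths where this search is infeasible.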
Using these conditions as the optimization criteria, degree distribution optimization is carried out by first applying linear programming to obtain an initial result under the erasure approximation (EA), then iteratively using DE results to correct the EA errors, ultimately allowing the optimal degree distribution under an infinite iteration count to be obtained with high accuracy. The above work is first carried out for binary codebooks, and then extended to 2^K-ary codebooks formed by applying a modulation mapping to binary LDGM codewords.

In practice, the number of iterations is necessarily finite, so some error will always remain in the BP extrinsic information, leading to decimation errors that can negatively affect subsequent iterations. This effect is analyzed by generalizing observations from binary erasure quantization (BEQ), and a recovery algorithm for BEQ as well as binary and non-binary MSE quantization is proposed that reduces the effect of decimation errors on subsequent iterations by adjusting the BP priors, thus achieving significantly lower quantization errors. In addition, this analysis also yields an empirical formula relating the quantization error to the number of iterations, the pace of decimation, and the code rate, and further optimization of these parameters as well as the degree distribution is then carried out.
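For intuition on the DE-based criteria, the textbook channel-coding recursion for the binary erasure channel (the standard form, not the thesis's quantization-specific DE) shows how convergence of an ensemble is checked numerically from its degree distribution:

```python
def bec_density_evolution(eps, lam, rho, iters=200):
    """Standard BEC density evolution: x is the variable-to-check erasure
    probability, updated as x <- eps * lam(1 - rho(1 - x)).  It converges
    to 0 iff the channel erasure rate eps is below the ensemble threshold."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
    return x

# (3,6)-regular LDPC ensemble: lambda(x) = x**2, rho(x) = x**5,
# with erasure threshold approximately 0.4294.
lam = lambda x: x ** 2
rho = lambda x: x ** 5
below = bec_density_evolution(0.40, lam, rho)   # below threshold: vanishes
above = bec_density_evolution(0.45, lam, rho)   # above threshold: stuck at a fixed point
```

Optimizing a degree distribution then amounts to searching for coefficients of lam and rho that keep such a recursion convergent at the best possible rate, which is where the linear-programming step comes in.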
Simulation results show that the proposed iterative quantization algorithm can indeed achieve near-ideal quantization error in MSE quantization, with the gap to the theoretical limit reduced to as low as 0.012 dB, which is beyond the performance achievable with other methods, including trellis-coded quantization (TCQ) and polar codes, at comparable block length and computational complexity.

Finally, the LDGM codebook is generalized into a nested LDGM-LDPC codebook, and its use in Gaussian dirty-paper coding is investigated using the binary case as an example, mostly regarding the design of the quantization algorithm and the optimization of the degree distribution. Simulation results verify that such a codebook can indeed achieve transmission rates fairly close to capacity, thanks to its near-ideal shaping performance.
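The quoted 0.012 dB figure is a gap on a logarithmic distortion scale. As a generic point of reference (the exact benchmark is defined in the thesis; here the Gaussian rate-distortion bound D(R) = σ²·2^(−2R) is assumed), such a gap is computed as:

```python
import math

def mse_gap_db(rate_bits, mse, variance=1.0):
    """dB gap between an achieved per-sample MSE and the Gaussian
    rate-distortion bound D(R) = variance * 2**(-2 * rate_bits)."""
    d_opt = variance * 2.0 ** (-2.0 * rate_bits)
    return 10.0 * math.log10(mse / d_opt)

# At rate 1 bit/sample the bound is D = 0.25; achieving MSE 0.2507
# corresponds to a gap of roughly 0.012 dB.
gap = mse_gap_db(1.0, 0.2507)
```

On this scale a 0.012 dB gap means the achieved distortion exceeds the bound by only about 0.3%, which illustrates how tight the reported performance is.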
