Application of Electronic Technique (《电子技术应用》)
基于深度學(xué)習(xí)的神經(jīng)歸一化最小和LDPC長(zhǎng)碼譯碼
電子技術(shù)應(yīng)用
賈迪1,嚴(yán)偉1,姚賽杰2,張權(quán)2,劉亞歡2
1.北京大學(xué) 軟件與微電子學(xué)院,北京 102600;2.裕太微電子股份有限公司
摘要: LDPC碼是一種應(yīng)用廣泛的高性能糾錯(cuò)碼,近年來(lái)基于深度學(xué)習(xí)和神經(jīng)網(wǎng)絡(luò)的LDPC譯碼成為研究熱點(diǎn)?;贑CSDS標(biāo)準(zhǔn)的(512,256)LDPC碼,首先研究了傳統(tǒng)的SP、MS、NMS、OMS的譯碼算法,為神經(jīng)網(wǎng)絡(luò)的構(gòu)建奠定基礎(chǔ)。然后研究基于數(shù)據(jù)驅(qū)動(dòng)(DD)的譯碼方法,即采用大量信息及其經(jīng)編碼、調(diào)制、加噪的LDPC碼作為訓(xùn)練數(shù)據(jù)在多層感知層(MLP)神經(jīng)網(wǎng)絡(luò)中進(jìn)行訓(xùn)練。為解決數(shù)據(jù)驅(qū)動(dòng)方法誤碼率高的問(wèn)題,又提出了將NMS算法映射到神經(jīng)網(wǎng)絡(luò)結(jié)構(gòu)的神經(jīng)歸一化最小和(NNMS)譯碼,取得了比NMS更優(yōu)秀的誤碼性能,信道信噪比為3.5 dB時(shí)誤碼率下降85.19%。最后研究了提升NNMS網(wǎng)絡(luò)的SNR泛化能力的改進(jìn)訓(xùn)練方法。
中圖分類(lèi)號(hào):TN911.22 文獻(xiàn)標(biāo)志碼:A DOI: 10.16157/j.issn.0258-7998.245766
Citation format: Jia Di, Yan Wei, Yao Saijie, et al. LDPC long code decoding with neural normalized min-sum based on deep learning[J]. Application of Electronic Technique, 2024, 50(12): 7-12.
LDPC long code decoding with neural normalized min-sum based on deep learning
Jia Di1,Yan Wei1,Yao Saijie2,Zhang Quan2,Liu Yahuan2
1. School of Software and Microelectronics, Peking University, Beijing 102600, China; 2. Motorcomm Co., Ltd., Shanghai 201210, China
Abstract: LDPC codes are widely used, high-performance error-correction codes, and in recent years LDPC decoding based on deep learning and neural networks has become a research hotspot. Based on the (512,256) LDPC code of the CCSDS standard, this paper first studies the traditional SP, MS, NMS, and OMS decoding algorithms, laying a foundation for the construction of the neural networks. It then studies a data-driven (DD) decoding method, which trains a multi-layer perceptron (MLP) neural network on a large amount of message data together with the corresponding encoded, modulated, and noise-corrupted LDPC codewords. To address the high bit error rate (BER) of the data-driven method, neural normalized min-sum (NNMS) decoding, which maps the NMS algorithm onto a neural network structure, is proposed; it achieves better BER performance than NMS, with the BER falling by 85.19% at a channel SNR of 3.5 dB. Finally, improved training methods that enhance the SNR generalization ability of the NNMS network are studied.
Key words: LDPC; deep learning; neural networks

Introduction

低密度奇偶校驗(yàn)碼(Low Density Parity Check, LDPC)是一種性能逼近香農(nóng)極限[1]和具有高譯碼吞吐量[2]的前向糾錯(cuò)碼,被廣泛運(yùn)用于有線、無(wú)線、衛(wèi)星、以太網(wǎng)等通信系統(tǒng)中。編碼后的LDPC碼被附加上冗余信息,經(jīng)調(diào)制和噪聲信道后再進(jìn)行譯碼,力求盡可能地糾正其中的誤碼。傳統(tǒng)的LDPC譯碼算法包括和積譯碼(Sum Product, SP)、最小和譯碼(Min-sum, MS)、基于MS的改進(jìn)算法有歸一化最小和譯碼(Normalized Min-sum, NMS)與帶偏置項(xiàng)的最小和譯碼(Offset Min-sum, OMS)。隨著近年來(lái)人工智能的快速發(fā)展,基于神經(jīng)網(wǎng)絡(luò)深度學(xué)習(xí)越來(lái)越廣泛地應(yīng)用于各領(lǐng)域的研究,將深度學(xué)習(xí)方法應(yīng)用于信道譯碼的研究也成為了一大研究熱點(diǎn)。Nachmanid等已證明對(duì)Tanner圖的邊緣分配權(quán)重,可相比傳統(tǒng)置信傳播(Belief Propogation, BP)算法減少迭代次數(shù),提高譯碼性能[3]。Wang等人提出的DNN譯碼數(shù)學(xué)復(fù)雜度高,僅適用于短碼,在長(zhǎng)碼譯碼中展現(xiàn)性能不佳[4]。Lugosch 等提出可用于硬件實(shí)現(xiàn)的神經(jīng)偏置項(xiàng)最小和譯碼(Neural Offset Min-sum, NOMS)[5], 但該方法也難以應(yīng)用于長(zhǎng)碼譯碼。本文研究基于深度學(xué)習(xí)的LDPC長(zhǎng)碼譯碼方法。首先研究數(shù)據(jù)驅(qū)動(dòng)譯碼算法,即預(yù)先設(shè)置適當(dāng)結(jié)構(gòu)的MLP網(wǎng)絡(luò),然后直接采用大量信息與編碼數(shù)據(jù)進(jìn)行訓(xùn)練,從而構(gòu)建譯碼神經(jīng)網(wǎng)絡(luò)。由于沒(méi)有將傳統(tǒng)算法的迭代結(jié)構(gòu)融入其中,此方法的譯碼效果不理想。而后提出神經(jīng)歸一化最小和譯碼(Neural Normalized Min-sum, NNMS),它將傳統(tǒng)的NMS算法的迭代結(jié)構(gòu)改造為神經(jīng)網(wǎng)絡(luò),再對(duì)神經(jīng)網(wǎng)絡(luò)的參數(shù)進(jìn)行訓(xùn)練。NNMS將傳統(tǒng)NMS算法與神經(jīng)網(wǎng)絡(luò)相結(jié)合,相比于MS和NMS算法均得到了性能的提升。


本文詳細(xì)內(nèi)容請(qǐng)下載:

http://theprogrammingfactory.com/resource/share/2000006241


Author information:

Jia Di1, Yan Wei1, Yao Saijie2, Zhang Quan2, Liu Yahuan2

(1. School of Software and Microelectronics, Peking University, Beijing 102600, China;

2. Motorcomm Co., Ltd., Shanghai 201210, China)



此內(nèi)容為AET網(wǎng)站原創(chuàng),未經(jīng)授權(quán)禁止轉(zhuǎn)載。