Achieving 1/2 log (1+SNR) on the AWGN channel with lattice encoding and decoding
Published in: IEEE Transactions on Information Theory, Oct. 2004, Vol. 50, No. 10, pp. 2293-2314
Format: Article
Language: English
Abstract: We address an open question regarding whether a lattice code with lattice decoding (as opposed to maximum-likelihood (ML) decoding) can achieve the additive white Gaussian noise (AWGN) channel capacity. We first demonstrate how minimum mean-square error (MMSE) scaling along with dithering (lattice randomization) techniques can transform the power-constrained AWGN channel into a modulo-lattice additive noise channel, whose effective noise is reduced by a factor of √((1+SNR)/SNR). For the resulting channel, a uniform input maximizes mutual information, which in the limit of large lattice dimension becomes 1/2 log(1+SNR), i.e., the full capacity of the original power-constrained AWGN channel. We then show that capacity may also be achieved using nested lattice codes, with the coarse lattice serving for shaping via the modulo-lattice transformation and the fine lattice for channel coding. We show that such pairs exist for any desired nesting ratio, i.e., for any signal-to-noise ratio (SNR). Furthermore, for the modulo-lattice additive noise channel, lattice decoding is optimal. Finally, we show that the error exponent of the proposed scheme is lower bounded by the Poltyrev exponent.
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/TIT.2004.834787
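
The modulo-lattice transformation summarized in the abstract can be illustrated with a minimal numerical sketch. The following is not from the paper: it substitutes a one-dimensional scaled integer lattice for the coarse shaping lattice, and the parameters and names (P, N, alpha, mod_lattice, n) are assumptions chosen for this example. It checks that MMSE scaling plus dithering turns the AWGN channel into a mod-Λ additive noise channel whose effective noise variance is about α²N + (1−α)²P = N·SNR/(1+SNR), i.e., the noise reduction by √((1+SNR)/SNR) mentioned above.

```python
# Minimal sketch (not from the paper): 1-D modulo-lattice channel with MMSE
# scaling and dithering. P, N, L, n and all variable names are illustrative
# assumptions; the paper's construction uses high-dimensional nested lattices.
import numpy as np

rng = np.random.default_rng(0)

P, N = 4.0, 1.0                      # power constraint and AWGN variance
snr = P / N
alpha = snr / (1.0 + snr)            # MMSE scaling coefficient

# Coarse "lattice" L*Z, with fundamental cell [-L/2, L/2) whose second
# moment L^2/12 matches the power constraint P.
L = np.sqrt(12.0 * P)

def mod_lattice(t):
    """Reduce t modulo the lattice L*Z into the cell [-L/2, L/2)."""
    return t - L * np.round(t / L)

n = 200_000
v = rng.uniform(-L / 2, L / 2, n)    # information point inside the cell
u = rng.uniform(-L / 2, L / 2, n)    # dither, uniform over the cell

x = mod_lattice(v + u)               # transmitted signal, uniform over the cell => power ~ P
y = x + rng.normal(0.0, np.sqrt(N), n)   # AWGN channel output

y_eff = mod_lattice(alpha * y - u)   # receiver: MMSE-scale, subtract dither, reduce mod Lambda
z_eff = mod_lattice(y_eff - v)       # effective additive noise of the mod-Lambda channel

print("empirical effective noise variance :", round(z_eff.var(), 4))
print("predicted alpha^2*N + (1-alpha)^2*P:", round(alpha**2 * N + (1 - alpha)**2 * P, 4))
# The original noise variance is N; the effective variance is N*SNR/(1+SNR),
# i.e., the noise amplitude is reduced by the factor sqrt((1+SNR)/SNR) cited above.
```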