Essentials of Error-Control Coding

Jorge Castiñeira Moreira, Patrick Guy Farrell




Rapid advances in electronic and optical technology have enabled the implementation of powerful error-control codes, which are now used in almost the entire range of information systems with close to optimal performance. These codes and decoding methods are required for the detection and correction of the errors and erasures which inevitably occur in digital information during transmission, storage and processing because of noise, interference and other imperfections.

However, error-control coding is a complex, novel and unfamiliar area, not yet widely understood and appreciated. This book sets out to provide a clear description of the essentials of the subject, with comprehensive and up-to-date coverage of the most useful codes and their decoding algorithms. A practical engineering and information technology emphasis, together with relevant background material and fundamental theoretical aspects, makes this an in-depth guide to the essentials of error-control coding.

  • Provides extensive and detailed coverage of Block, Cyclic, BCH, Reed-Solomon, Convolutional, Turbo, and Low Density Parity Check (LDPC) codes, together with relevant aspects of Information Theory
  • EXIT chart performance analysis for iteratively decoded error-control techniques
  • Heavily illustrated with tables, diagrams, graphs, worked examples, and exercises
  • Invaluable companion website features slides of figures, algorithm software, updates and solutions to problems     

Offering a complete overview of error-control coding, this book is an indispensable resource for students, engineers and researchers in telecommunications engineering, communication networks, electronic engineering, computer science, information systems and technology, digital signal processing, and applied mathematics.


Table of Contents

Preface & Acknowledgements.

Symbols & Abbreviations.

1. Information and Coding Theory.


1.1 Information.

1.1.1 Information measure.

1.2 Entropy and Information Rate.

1.3 Extended Discrete Memoryless Source.

1.4 Channels and Mutual Information.

1.4.1 Information transmission over discrete channels.

1.4.2 Information channels.

1.5 Channel Probability Relationships.

1.6 The a priori and a posteriori Entropies.

1.7 Mutual Information.

1.7.1 Mutual information: Definition.

1.7.2 Mutual information: Properties.

1.8 Capacity of a Discrete Channel.

1.9 Shannon’s Theorems.

1.9.1 Source coding theorem.

1.9.2 Channel capacity and coding.

1.9.3 Channel coding theorem.

1.10 Signal Spaces and the Channel Coding Theorem.

1.10.1 Capacity of the Gaussian Channel.

1.11 Error Control Coding.

1.12 Limits to Communication and their Consequences.

Bibliography and References.

2. Block Codes.



2.1 Error Control Coding.

2.2 Error Detection and Correction.

2.2.1 Simple codes: The repetition code.

2.3 Block Codes: Introduction and Parameters.

2.4 The Vector Space over the Binary Field.

2.4.1 Vector subspaces.

2.4.2 Dual subspaces.

2.4.3 Matrix form.

2.4.4 Dual subspace matrix.

2.5 Linear Block Codes.

2.5.1 Generator matrix G.

2.5.2 Block codes in systematic form.

2.5.3 Parity check matrix H.

2.6 Syndrome Error Detection.

2.7 Minimum Distance of a Block Code.

2.7.1 Minimum distance and the structure of the H matrix.

2.8 Error Correction Capability of a Block Code.

2.9 Syndrome Detection and the Standard Array.

2.10 Hamming Codes.

2.11 Forward Error Correction (FEC) and Automatic Repeat ReQuest (ARQ).

2.11.1 Forward error correction (FEC).

2.11.2 Automatic Repeat ReQuest (ARQ).

2.11.3 ARQ schemes. Stop and Wait. Go back N. Selective Repeat.

2.11.4 ARQ scheme efficiencies.

2.11.5 Hybrid-ARQ schemes.

Bibliography and References.

3. Cyclic Codes.



3.1 Description.

3.2 Polynomial Representation of Codewords.

3.3 Generator Polynomial of a Cyclic Code.

3.4 Cyclic Codes in Systematic Form.

3.5 Generator Matrix of a Cyclic Code.

3.6 Syndrome Calculation and Error Detection.

3.7 Decoding of Cyclic Codes.

3.8 An Application Example: CRC Code for the Ethernet Standard.

Bibliography and References.

4. BCH Codes.



4.1 Introduction: The Minimal Polynomial.

4.2 Description of BCH Cyclic Codes.

4.2.1 Bounds on the error-correction capability of a BCH code: the Vandermonde determinant.

4.3 Decoding of BCH Codes.

4.4 Error Location and Error Evaluation Polynomials.

4.5 The Key Equation.

4.6 Decoding of BCH Codes using the Euclidean Algorithm.

4.6.1 The Euclidean algorithm.

Bibliography and References.

5. Reed-Solomon Codes.



5.1 Introduction.

5.2 Error Correction Capability of RS Codes: The Vandermonde Determinant.

5.3 RS Codes in Systematic Form.

5.4 Syndrome Decoding of RS Codes.

5.5 The Euclidean Algorithm. Error Location and Evaluation Polynomials.

5.6 Decoding of RS Codes using the Euclidean Algorithm.

5.6.1 Steps of the Euclidean algorithm.

5.7 Decoding of RS and BCH Codes using the Berlekamp-Massey Algorithm.

5.7.1 Berlekamp-Massey iterative algorithm for finding the error location polynomial.

5.7.2 Berlekamp-Massey decoding of RS codes.

5.7.3 Relationship between the error location polynomials of the Euclidean and Berlekamp-Massey algorithms.

5.8 A Practical Application: Error-Control Coding for the Compact Disc (CD).

5.8.1 CD Characteristics.

5.8.2 Channel characteristics.

5.8.3 Coding procedure.

5.9 Encoding for RS codes C(RS)(28,24), C(RS)(32,28) and C(RS)(255,251).

5.10 Decoding of RS Codes C(RS)(28,24) and C(RS)(32,28).

5.10.1 Berlekamp-Massey decoding.

5.10.2 Alternative decoding methods.

5.10.3 Direct solution of syndrome equations.

5.11 Importance of Interleaving.

Bibliography and References.

Problems.

6. Convolutional Codes.


6.1 Linear Sequential Circuits.

6.2 Convolutional Codes and Encoders.

6.3 Description in the D-Transform Domain.

6.4 Convolutional Encoder Representations.

6.4.1 Representation of connections.

6.4.2 State diagram representation.

6.4.3 Trellis representation.

6.5 Convolutional Codes in Systematic Form.

6.6 General Structure of FIR and IIR Finite State Sequential Machines.

6.6.1 FIR FSSM.

6.6.2 IIR FSSM.

6.7 State Transfer Function Matrix: Calculation of the Transfer Function.

6.7.1 State transfer function for FIR FSSMs.

6.7.2 State transfer function for IIR FSSMs.

6.8 Relationship between the Systematic and Non-Systematic Forms.

6.9 Distance Properties of Convolutional Codes.

6.10 Minimum Free Distance of a Convolutional Code.

6.11 Maximum Likelihood Detection (MLD).

6.12 Decoding of Convolutional Codes: The Viterbi Algorithm.

6.13 Extended and Modified State Diagram.

6.14 Error Probability Analysis for Convolutional Codes.

6.15 Hard and Soft Decisions.

6.15.1 Maximum likelihood criterion for the Gaussian channel.

6.15.2 Bounds for soft decision detection.

6.15.3 An example of soft decision decoding of convolutional codes.

6.16 Punctured Convolutional Codes and Rate Compatible Schemes.

Bibliography and References.

7. Turbo Codes.



7.1 A Turbo Encoder.

7.2 Decoding of Turbo Codes.

7.2.1 Turbo decoder.

7.2.2 Probabilities and estimates.

7.2.3 Symbol detection.

7.2.4 The log likelihood ratio.

7.3 Markov Sources and Discrete Channels.

7.4 The BCJR Algorithm: Trellis Coding and Discrete Memoryless Channels.

7.5 Iterative Coefficient Calculation.

7.6 MAP BCJR Algorithm and the Log Likelihood Ratio.

7.6.1 The BCJR MAP algorithm: LLR calculation.

7.6.2 Calculation of coefficients γi(u′,u).

7.7 Turbo Decoding.

7.7.1 Initial conditions of coefficients αj-1(u′) and βj(u).

7.8 Construction Methods for Turbo Codes.

7.8.1 Interleavers.

7.8.2 Block interleavers.

7.8.3 Convolutional interleavers.

7.8.4 Random interleavers.

7.8.5 Linear interleavers.

7.8.6 Code concatenation methods. Serial concatenation. Parallel concatenation.

7.8.7 Turbo code performance as a function of size and type of interleaver.

7.9 Other Decoding Algorithms for Turbo Codes.

7.10 EXIT Charts for Turbo Codes.

7.10.1 Introduction to EXIT charts.

7.10.2 Construction of the EXIT chart.

7.10.3 Extrinsic transfer characteristics of the constituent decoders.

Bibliography and References.

8. Low-Density Parity Check Codes.



8.1 Different Systematic Forms of a Block Code.

8.2 Description of LDPC Codes.

8.3 Construction of LDPC Codes.

8.3.1 Regular LDPC codes.

8.3.2 Irregular LDPC codes.

8.3.3 Decoding of LDPC codes: the Tanner graph.

8.4 The Sum-Product Algorithm.

8.5 Sum-Product Algorithm for LDPC Codes: An Example.

8.6 Simplifications of the Sum-Product Algorithm.

8.7 A Logarithmic LDPC Decoder.

8.7.1 Initialization.

8.7.2 Horizontal step.

8.7.3 Vertical step.

8.7.4 Summary of the logarithmic decoding algorithm.

8.7.5 Construction of the lookup table.

8.8 EXIT Charts for LDPC Codes.

8.8.1 Introduction.

8.8.2 Iterative decoding of block codes.

8.8.3 EXIT chart construction for LDPC codes.

8.8.4 Mutual information function.

8.8.5 EXIT chart for the symbol node decoder (SND).

8.8.6 EXIT chart for the parity check node decoder (PCND).

8.9 Fountain and LT Codes.

8.9.1 Introduction.

8.9.2 Fountain codes.

8.9.3 Linear random codes.

8.9.4 LT codes. LT encoder. LT decoder.

8.10 LDPC and Turbo Codes.

Bibliography and References.


APPENDIX A: Error Probability in the Transmission of Digital Signals.

A.1 Digital Signalling.

A.1.1 Pulse amplitude modulated digital signals.

A.2 Bit Error Rate (BER).


APPENDIX B: Galois Fields GF(q).

B.1 Groups.

B.2 Addition and Multiplication modulo-2.

B.3 Fields.

B.4 Polynomials over Binary Fields.

B.5 Construction of a Galois Field GF(2m).

B.6 Properties of Extended Galois Fields GF(2m). 

B.7 Minimal Polynomials.

B.7.1 Properties of Minimal Polynomials.


Answers to Problems.








