Binary BCH (15,7,5) workout
This is a math workout that forms the math part of my FPGA implementation of a forward error correction (FEC) code. Building on my earlier blog post on a Hamming cyclic code with \((n,k,d)=(15,11,3)\), I will expand its capability to a binary BCH \((15,7,5),\ t=2\) code based on the same primitive polynomial \(h(x)=1+x+x^4\).
One important class of cyclic codes is the BCH codes. The class is quite extensive and easy to decode, and because of that it is quite practical for use in real-life applications. By definition, for any \(r \ge 3\) and \(t \le 2^{r-1}-1\) there exists a BCH error-correcting code of length \(n=2^r -1\), with parity check length \((n-k) \le rt\), minimum distance \(d \ge 2t + 1\), and dimension \(k \geq n-rt\). The \(t=1\) error-correcting BCH code of length \(n=2^r-1\) is a Hamming code (old post).
Some more field algebra
The minimal polynomial of \(\alpha\), \(m_{\alpha}(x)\), is the polynomial \(f(x)\) in \(K[x]\) of smallest degree having \(\alpha\) as a root; that is, with \(f(x)=a_0+ a_1x+a_2x^2+\cdots+a_kx^k\), \(f(\alpha)=a_0+a_1\alpha+a_2\alpha^2+\cdots+a_k\alpha^k = 0\).
The order of a nonzero element \(\alpha\) in \(GF(2^r)\) is the smallest positive \(m\) such that \(\alpha^m = 1\). If \(\alpha\) is of order \(2^r-1\) then it is a primitive element. It is not easy to find primitive polynomials of higher degree. Luckily the hard part of finding them has been done and tabulated for our use, and that is what I will use for my implementation.
If \(\alpha\) has order \(m\), then \(\alpha\) is a root of \(1+x^m\). Relative to my previous post, defining \(\beta\) as the primitive element and \(\alpha=\beta^i\), then \(\alpha^{2^r-1}=(\beta^i)^{2^r-1}= (\beta^{2^r-1})^i = 1^i = 1\). \(\alpha\) is therefore a root of \(1+x^{2^r-1}\) and \(m_\alpha(x)\) is a factor of \(1+x^{2^r-1}\). In the last post \(g(x) \mid (1+x^{15})\). \(g(x)\) must be the least common multiple of \(m_1(x),m_2(x),\ldots,m_{2t}(x)\). Taking the conjugate roots into account, the t-error-correcting BCH code of length \(n=2^r-1\) has its generator \(g(x)\) reduced to

\[ g(x) = \operatorname{lcm}\{\,m_1(x),\, m_3(x),\, \ldots,\, m_{2t-1}(x)\,\}. \]
Since the degree of each minimal polynomial is \(r\) or less, \(g(x)\) has degree at most \(rt\), so the number of parity check digits \((n-k)\) is at most \(rt\). Based on the theorem that the set of all roots of \(m_\alpha(x)\) is the set of conjugates \(\{\alpha,\alpha^2,\alpha^4,\alpha^8,\ldots\}\), the single error-correcting BCH code has the generator polynomial \(g(x) = m_{\alpha}(x)\), and the two error-correcting code has the generator polynomial \(g(x) = m_\alpha(x)\,m_{\alpha^3}(x)\).
The elements of \(GF(2^4)\) based on \(p(x)=x^4+x+1\), as generated in the old post, are again tabulated below:
| word | \(x^i \bmod (1+x+x^4)\) | power of \(\alpha\) |
|------|--------------------------|---------------------|
| 0000 | 0 | — |
| 1000 | 1 | \(\alpha^0\) |
| 0100 | \(x\) | \(\alpha^1\) |
| 0010 | \(x^2\) | \(\alpha^2\) |
| 0001 | \(x^3\) | \(\alpha^3\) |
| 1100 | \(x^4 \equiv 1+x\) | \(\alpha^4\) |
| 0110 | \(x^5 \equiv x+x^2\) | \(\alpha^5\) |
| 0011 | \(x^6 \equiv x^2+x^3\) | \(\alpha^6\) |
| 1101 | \(x^7 \equiv 1+x+x^3\) | \(\alpha^7\) |
| 1010 | \(x^8 \equiv 1+x^2\) | \(\alpha^8\) |
| 0101 | \(x^9 \equiv x+x^3\) | \(\alpha^9\) |
| 1110 | \(x^{10} \equiv 1+x+x^2\) | \(\alpha^{10}\) |
| 0111 | \(x^{11} \equiv x+x^2+x^3\) | \(\alpha^{11}\) |
| 1111 | \(x^{12} \equiv 1+x+x^2+x^3\) | \(\alpha^{12}\) |
| 1011 | \(x^{13} \equiv 1+x^2+x^3\) | \(\alpha^{13}\) |
| 1001 | \(x^{14} \equiv 1+x^3\) | \(\alpha^{14}\) |
Note that \(\alpha^{15} = \alpha^0 = 1\).
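As an aside, a minimal Octave sketch (mine, not from the original post) that regenerates this table by repeatedly multiplying by \(x\) and reducing with \(x^4 = 1 + x\):

a = [1 0 0 0];                   % alpha^0 = 1, bits ordered [1 x x^2 x^3] as in the table
for i = 0:14
  fprintf('alpha^%-2d = %d%d%d%d\n', i, a);
  msb = a(4);                    % coefficient of x^3
  a = [0 a(1:3)];                % multiply by x
  if msb                         % reduce with x^4 = 1 + x
    a(1:2) = xor(a(1:2), [1 1]);
  end
end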
Next is to get the generator polynomial by working out the minimal polynomials \(m_1(x)=p(x)\) and \(m_3(x)\), that is, the minimal polynomials of \(\alpha\) and \(\alpha^3\), because I need \(g(x)=m_1(x)m_3(x)\) for this exercise. I can avoid the tedious calculation by using the tabulated generator polynomials of \((n,k,d)\) BCH codes. For the \((15,7,5),\ t=2\) code, \(m_3(x)=x^4 + x^3 + x^2 + x + 1\).
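As a quick cross-check of that tabulated \(m_3(x)\) (my own derivation, not part of the table lookup): the conjugates of \(\alpha^3\) are \(\alpha^3, \alpha^6, \alpha^{12}, \alpha^{24}=\alpha^9\), and since \((\alpha^3)^5=\alpha^{15}=1\) while \(\alpha^3 \neq 1\), \(\alpha^3\) is a root of \((x^5+1)/(x+1)\), so

\[ m_3(x) = (x+\alpha^3)(x+\alpha^6)(x+\alpha^9)(x+\alpha^{12}) = \frac{x^5+1}{x+1} = x^4+x^3+x^2+x+1. \]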
Using MATLAB/Octave's conv to multiply the two polynomials,
m1=[1 0 0 1 1]; m3=[1 1 1 1 1]; gx=mod(conv(m1,m3),2);   % gx = [1 1 1 0 1 0 0 0 1], i.e. x^8+x^7+x^6+x^4+1
This matches the tabulated octal value for this code, \((721)_8 = 111\_010\_001\). I can verify that \(g(x) \mid (x^{15} + 1)\) using Octave's deconv operation. This is consistent with the fact that any irreducible polynomial over GF(2) of degree \(m\) divides \(x^{2^m-1}+1\): both degree-4 factors of \(g(x)\) divide \(x^{15}+1\), so their product does as well.
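One way to do that check (my own snippet; reducing the integer remainder mod 2 is valid here because \(g(x)\) is monic):

gx = [1 1 1 0 1 0 0 0 1];         % g(x) from the conv step above
x15p1 = [1 zeros(1, 14) 1];       % x^15 + 1, descending powers
[q, r] = deconv(x15p1, gx);       % integer long division
mod(r, 2)                         % all zeros => g(x) divides x^15 + 1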
By definition, for a t-error-correcting BCH code of length \(2^m-1\), a binary n-tuple \(u(X)=u_0+u_1X+u_2X^2+\cdots+u_{n-1}X^{n-1}\) is a code word iff \(u(X)\) has \(\alpha,\alpha^2, \cdots,\alpha^{2t}\) as roots, that is,

\[ u(\alpha^i) = u_0 + u_1\alpha^{i} + u_2\alpha^{2i} + \cdots + u_{n-1}\alpha^{(n-1)i} = 0, \qquad 1 \le i \le 2t, \]
and for this exercise,

\[ u(\alpha) = u(\alpha^2) = u(\alpha^3) = u(\alpha^4) = 0, \]

which by conjugacy reduces to the two conditions \(u(\alpha)=0\) and \(u(\alpha^3)=0\);
note that the powers of \(\alpha\) wrap around on this finite field, for example, \(\alpha^{18} = \alpha^{15} \alpha^3 = \alpha^{3}\). Putting it in matrix form,

\[
\begin{pmatrix} u_0 & u_1 & u_2 & \cdots & u_{14} \end{pmatrix}
\begin{pmatrix}
1 & 1 \\
\alpha & \alpha^{3} \\
\alpha^{2} & \alpha^{6} \\
\vdots & \vdots \\
\alpha^{14} & \alpha^{42}
\end{pmatrix}
=
\begin{pmatrix} 0 & 0 \end{pmatrix}.
\]
The equation above is in the form

\[ u H^t = 0, \]
where \(H^t\) is the transpose of the parity check matrix. For this \((15,7,5)\) BCH code, \(H^t\) has 15 rows, the row for bit position \(i\) \((i=0,\ldots,14)\) being \(\big(\alpha^{i},\ \alpha^{3i \bmod 15}\big)\); expanding each field element into its 4-bit word from the table above gives the binary \(15 \times 8\) matrix.
The powers \((\alpha^3)^i\) can be easily computed from the powers of \(\alpha\), for example, \((\alpha^3)^9 = \alpha^{27} = \alpha^{15} \alpha^{12} = \alpha^{12}\).
The \(7 \times 15\) generator matrix \(G\) is then obtained from the generator polynomial \(g(x)\); here the encoding is systematic, so the seven message bits appear unchanged at the front of each code word.
Because of the orthogonality between \(G\) and \(H\), \(GH^t = 0\). The encoded code word \(u\) for this \((n,k,d)=(15,7,5)\) code is

\[ u = vG, \]
where \(v\) is the 7-bit source code word to be encoded.
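A sketch of one way to build and check these matrices in Octave (my own construction, not necessarily the exact matrices used by the post's code; the bit and row ordering, in particular inside the binary \(H^t\), is my own choice):

gx = [1 1 1 0 1 0 0 0 1];                    % g(x) = x^8+x^7+x^6+x^4+1, descending powers
P  = zeros(7, 8);
for i = 1:7                                  % message bit i sits at x^(15-i)
  [~, r] = deconv([1 zeros(1, 15-i)], gx);   % x^(15-i) mod g(x); gx is monic, so reducing
  P(i, :) = mod(r(end-7:end), 2);            %  the integer remainder mod 2 is valid
end
G = [eye(7) P];                              % systematic generator, message bits in front

A = zeros(15, 4); a = [1 0 0 0];             % powers of alpha as 4-bit rows, bits [1 x x^2 x^3]
for k = 1:15
  A(k, :) = a;
  msb = a(4); a = [0 a(1:3)];
  if msb, a(1:2) = xor(a(1:2), [1 1]); end
end
Ht = zeros(15, 8);                           % row c of H^t pairs alpha^(15-c) with alpha^(3(15-c))
for c = 1:15
  ec = 15 - c;
  Ht(c, :) = [A(mod(ec, 15)+1, :), A(mod(3*ec, 15)+1, :)];
end

any(any(mod(G*Ht, 2)))                       % 0 => G*H^t = 0
v = dec2bin(0:2^7-1) - '0';                  % all 7-bit source words
u = mod(v*G, 2);                             % u = v*G
u(100, :)                                    % 1100011 00111110, the u(100) used later in the post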
Performance of this (15,7,5) binary BCH code
Weight distribution of this (15,7,5) code,
| weight | number of code words |
|--------|----------------------|
| 0  | 1  |
| 5  | 18 |
| 6  | 30 |
| 7  | 15 |
| 8  | 15 |
| 9  | 30 |
| 10 | 18 |
| 15 | 1  |
From the table above it is easy to see that the minimum distance from the zero code word is 5. Based on the weight distribution, with \(A_j\) the number of code words having weight \(j\) and \(p\) the bit error probability, the probability that an error goes undetected is

\[ P_{notdetect} = \sum_{j=5}^{15} A_j\, p^j (1-p)^{15-j}. \]
The term with the minimum weight is the dominant term, so if \(p=10^{-2}\), or one percent bit error probability, the probability of not detecting the error comes out to \(P_{notdetect} \approx 1.6279 \times 10^{-9}\), i.e., less than one in a billion, compared to \(P_{notdetect} \approx 6.5904 \times 10^{-2}\) for the uncoded word. Based on this \(p\) value, the coded word is about 40 million times better when it comes to error detection.
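A quick Octave check of these numbers from the weight distribution above (my own snippet; the full sum comes out slightly larger than the dominant-term value):

p  = 1e-2;
wj = [5 6 7 8 9 10 15]; Aj = [18 30 15 15 30 18 1];   % weight distribution from the table
P_full = sum(Aj .* p.^wj .* (1-p).^(15-wj))           % full sum, ~1.66e-9
P_min5 = 18 * p^5 * (1-p)^10                          % dominant j=5 term, ~1.6279e-9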
For a block of \(n=15\) bits, summing over the patterns of \(j > t\) errors, the probability of block error or message error is

\[ P(E) \approx \sum_{j=t+1}^{n} \binom{n}{j} p^j (1-p)^{n-j}. \]
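A one-line Octave version of that sum (my own snippet; the \(\approx 4.2 \times 10^{-4}\) figure in the comment is just this snippet's output at \(p=10^{-2}\), not a number from the post):

p = 1e-2; n = 15; t = 2;
j = t+1:n;
Pblock = sum(arrayfun(@(k) nchoosek(n, k), j) .* p.^j .* (1-p).^(n-j))   % ~4.2e-4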
Decoder and error locator
There are \(2^{15}\) possible received words but, since \(H^t\) has \(n-k=8\) columns, only \(2^{8}=256\) distinct syndromes; \(1+\binom{n}{1} + \binom{n}{2} = 121\) error patterns are correctable for this implementation.
If \(s_i,\ i=1,3\), are the syndrome components, each having 4 bits and corresponding to the columns of the transposed parity check matrix \(H^t\),
and \(w\) is the received code word, then \(wH^t=[w(\alpha), w(\alpha^3)] = [s_1, s_3]\) is the syndrome of this code word. For a single bit error, \(e(x)=x^i\), the syndrome is \(wH^t=[\alpha^i,(\alpha^3)^i]\). If there are two errors in the code word, \(e(x)=x^i+x^j,\ i\neq j\), the syndrome becomes \([s_1,s_3]=[\alpha^i+\alpha^j,\ (\alpha^3)^i+(\alpha^3)^j]\). The equations for locating the error positions are

\[ s_1 = \alpha^{i} + \alpha^{j}, \qquad s_3 = \alpha^{3i} + \alpha^{3j}. \]
Substituting \(\alpha^j = s_1 + \alpha^i\) into \(s_3\),

\[ s_3 = \alpha^{3i} + (s_1+\alpha^{i})^3 = \alpha^{3i} + s_1^3 + s_1^2\alpha^{i} + s_1\alpha^{2i} + \alpha^{3i} = s_1^3 + s_1^2\alpha^{i} + s_1\alpha^{2i}. \]
Locating the error bit positions amounts to finding the values of \(i\) that make \(s_1^3 + s_1 \alpha^{2i} + s_1^2 \alpha^i + s_3 = 0\). Higher \(t\) leads to messier equations; for example, \(t=3\) will also have an \(s_5\) syndrome component, and each syndrome component gains one extra unknown, i.e., \(s_1 = \alpha^i + \alpha^j + \alpha^l\). As you can see, the complexity multiplies many fold. This is why I chose the lower \(t\), to avoid digging myself into a hole I cannot get out of.
To test the error correction capability of this exercise I will use code word \(u(100)\) and alter bit 12 and bit 13 to simulate errors,
v=dec2bin(0:2^7-1)-'0';       % input code words
u=mod(v*G,2);                 % BCH coded words
w=u(100,:); w(2)=0; w(3)=1;   % alter bit 13, bit 12 for error bits
mod(w*h,2)                    % syndrome
The error syndrome from the mod() operation is \(S=\{0010,0110\}\). This corresponds to \(S=\{\alpha,\alpha^5\}\) from the tabulated powers of \(\alpha\) above. The equation for this syndrome is then

\[ s_1^3 + s_1 \alpha^{2i} + s_1^2 \alpha^{i} + s_3 = \alpha^{3} + \alpha^{1+2i} + \alpha^{2+i} + \alpha^{5} = 0. \]
Using \(i=12\),

\[ \alpha^{3} + \alpha^{25} + \alpha^{14} + \alpha^{5} = \alpha^{3} + \alpha^{10} + \alpha^{14} + \alpha^{5}. \]
Substituting the values for the powers of \(\alpha\),

\[ 0001 + 1110 + 1001 + 0110 = 0000. \]
This agrees with what was simulated for bit 12. Bit position 13 will produce the same result. This is known as the Chien search algorithm.
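A small Octave sketch of this search over all 15 bit positions (my own; it assumes the syndrome exponents \(s_1=\alpha^1\) and \(s_3=\alpha^5\) from this example):

A = zeros(15,4); a = [1 0 0 0];              % rebuild the alpha table (same loop as the
for k = 1:15                                 %  sketch after the GF(2^4) table above)
  A(k,:) = a;
  msb = a(4); a = [0 a(1:3)];
  if msb, a(1:2) = xor(a(1:2), [1 1]); end
end
e1 = 1; e3 = 5;                              % s1 = alpha^1, s3 = alpha^5 (assumed from above)
for i = 0:14                                 % test s1^3 + s1*a^(2i) + s1^2*a^i + s3 = 0
  acc =          A(mod(3*e1,   15)+1, :);    % s1^3
  acc = xor(acc, A(mod(e1+2*i, 15)+1, :));   % s1 * alpha^(2i)
  acc = xor(acc, A(mod(2*e1+i, 15)+1, :));   % s1^2 * alpha^i
  acc = xor(acc, A(e3+1, :));                % s3
  if ~any(acc)
    fprintf('error at bit position %d\n', i);  % prints 12 and 13
  end
end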
The output of \(wH^t\) produces a syndrome identical to the sum of two rows of \(H^t\), row two and row three. The corrected code word is the sum of the received code word and the rows of the identity matrix corresponding to those rows of the parity check matrix.
Another algorithm formulates the syndrome based on the minimal polynomials instead of the generator polynomial. The syndrome components for this exercise using that algorithm would be \(S=\{s_1,s_2,s_3,s_4\}\), where \(s_1, s_2, s_4\) are obtained from \(m_1(x)\) and \(s_3\) is obtained from \(m_3(x)\) (\(s_2\) and \(s_4\) are powers of \(s_1\)). From this the error-locating polynomial is formed. From that polynomial the connection polynomial is then formed and transferred to the frequency-domain system of equations, the key equation, i.e. \(\sum_{j=0}^{t} \Lambda_j E_{k-j}=0\).
While there are several algorithms for error locating, they are not easy for hardware implementation. They work well on pencil and paper, yet I am still trying to figure out how to translate them into hardware. One possible algorithm that I like is this: let \(w(x)\) be the received code word, where \(w(x)=u(x)+e(x)\); \(u(x)\) and \(e(x)\) are the transmitted code word and the error respectively.
1. Calculate the syndrome \(s(x) = w(x)\ mod\ g(x)\).
2. For \(i \ge 0\), calculate \(s_i(x)=x^i s(x)\ mod\ g(x)\) until an \(s_j(x)\) is found where the weight of \(s_j(x) \le t\).
3. Once \(s_j(x)\) is located, \(e(x)=x^{n-j}s_j(x)\ mod\ (x^n + 1)\) gives the most likely error bits.
Every algorithm is iterative. The iteration count for this one is at most \(2t\), because if there are \(\nu \le t\) error positions the iteration count is at most \(\nu + t\).
Again, assume the transmitted code word \(u(100)=110\_0011\_0011\_1110\)
that is received with 2 bit errors, at bit 13 and bit 12: \(101\_0011\_0011\_1110\).
The computed syndrome is

\[ s(x) = w(x) \bmod g(x) = x + x^2 + x^3 + x^6, \]

of weight 4, and shifting gives \(s_1(x) = x^2+x^3+x^4+x^7\) (weight 4), \(s_2(x) = 1+x^3+x^5+x^6+x^7\) (weight 5), and \(s_3(x) = 1+x\) (weight 2).
The weight of \(s_3(x) \le t\) is reached, so \(j=3\), and

\[ e(x) = x^{15-3}\, s_3(x) \bmod (x^{15}+1) = x^{12}(1+x) = x^{12} + x^{13}. \]
The most likely code word is therefore \(w(x)+e(x)\). This algorithm gives me both bit positions. Each \(s_i(x)\) is the shifted version of \(s_{i-1}(x)\) reduced modulo the generator, so I think I can reuse the circuit for these operations.
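A minimal Octave sketch of the three steps, run on the received word from this example (my own code; the leftmost bit is taken as the \(x^{14}\) coefficient, and reducing deconv's integer remainder mod 2 is valid because both divisors are monic):

n = 15; t = 2;
g = [1 1 1 0 1 0 0 0 1];                     % g(x) = x^8+x^7+x^6+x^4+1, descending powers
w = [1 0 1 0 0 1 1 0 0 1 1 1 1 1 0];         % received word, x^14 down to x^0

[~, r] = deconv(w, g);                       % step 1: s(x) = w(x) mod g(x)
s = mod(r(end-7:end), 2);
j = 0;
while sum(s) > t                             % step 2: s_j(x) = x*s_{j-1}(x) mod g(x)
  [~, r] = deconv([s 0], g);                 %  until weight(s_j) <= t
  s = mod(r(end-7:end), 2);
  j = j + 1;                                 %  (a real decoder would cap j, see below)
end
[~, r] = deconv([s zeros(1, n-j)], [1 zeros(1, n-1) 1]);  % step 3: x^(n-j)*s_j mod (x^n+1)
e = mod(r(end-n+1:end), 2)                   % j = 3; e has the x^13 and x^12 bits set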
How do I detect an uncorrectable code word? For this implementation, if the iteration count exceeds 4, then declare an error. I experimented with more than \(t\) errors and found that the process just goes on and on without reaching the minimum weight. Any solution for any algorithm needs to take into account the possibility that there may be fewer errors than the maximum number of correctable errors.
For fun and for speed I will do an HDL implementation of this algorithm when time permits, and I will update this post with a link to it.
There are many excellent text books and articles on this subject. Listed in the references are only a few that I have. For EEs, [CIT003] is a very well known text book on this subject.
Reference
- CIT001: Digital Communications: Fundamentals and Applications, 2nd Ed., Bernard Sklar.
- CIT002: Coding Theory: The Essentials, D.G. Hoffman, 1991.
- CIT003: Error Control Coding: Fundamentals and Applications, Shu Lin and Daniel J. Costello Jr., 1983.