A method for changing the redundancy of a sequential concatenated code

During the operation of information transmission systems, errors occur in the transmitted data under the influence of various negative factors. To correct them, codes are used that can handle both single and multiple errors. From the point of view of correcting multiple errors, concatenated codes are highly efficient. When a sequential concatenated code is constructed, encoding is carried out first by the external and then by the internal code. A significant disadvantage of such coding is a considerable increase in redundancy. It is proposed to regulate the redundancy by encoding with the internal code only a certain part of the bits from the output of the external-code encoder. The study showed the effectiveness of this method, which makes it possible to change the parameters of the concatenated code within a wide range. A method for increasing the correcting ability of such selective coding was also proposed and tested: the symbols decoded by the external code are multiplied by coefficients that differ for symbols that have and have not been encoded by the internal code. Decoding of the external code was carried out according to the Viterbi algorithm. The performance of the proposed method has been confirmed experimentally.


Introduction
Recent years have been characterized by intensive growth in the volume of information. Errors occur when data are stored and when information is transmitted over communication networks. Error correction is an important task at many levels of working with information, and among other methods, error-correcting codes are used for it [1]. In the process of noise-resistant coding, a sequence of information bits is converted into a sequence of code bits that has structural redundancy. The potential correcting ability of the code depends on the degree of introduced redundancy [2,3]. Noise-resistant codes are quite diverse and differ in the redundancy introduced, the number of correctable errors, the length of the encoded block, the methods of encoding and decoding, etc. To improve the correcting ability, noise-immune codes are combined in various ways [3,4]. In one such combination, encoding is carried out first with one code (called external), the resulting code sequence is then encoded with another code (called internal), and decoding is performed in the reverse order: first by the internal decoder, then by the external decoder. To avoid error clustering, an interleaver is often included in the structure. This fairly simple type of code combination is called a sequential concatenated code (SCC); the block diagram of a communication channel with an SCC is shown in Figure 1. It is known that even by connecting two relatively simple codes in an SCC, one can obtain a code with a very high correcting ability [4,5]. The negative side of such a connection is a significant increase in redundancy. Let us explain this with an example. Let each of the cascaded codes have twice as many code bits as information bits, that is, let the code rates of the internal and external codes both equal 0.5. Then the code rate of the sequential concatenated connection of such codes is 0.25.
The quality of a communication channel can change under the influence of various factors [1,6]; the quality of HF and VHF radio communications, especially mobile ones, is highly unstable. It is proposed to regulate the redundancy and correcting ability of the SCC by changing the proportion of the outer-code bits that are encoded by the inner code. For convenience, we will call such an SCC a flexible SCC (hereinafter FSCC). The FSCC makes it possible to smoothly change the information transmission rate depending on the state of the data transmission channel, thus increasing the coding efficiency. When the input of the external-code decoder receives some symbols from the output of the internal-code decoder, the values of those decoded symbols are more reliable than the values of the symbols that were not encoded and decoded by the internal code. It can be assumed that the higher the coefficient by which the value of a decoded symbol is multiplied, the greater the influence of this symbol on the decoding result. The greater reliability of the symbols additionally encoded with the internal code can therefore be taken into account by multiplying their values by coefficients K > 1, or by multiplying the values of the less reliable symbols by K < 1; it is assumed that this should increase the efficiency of the FSCC. The use of similar coefficients in the construction of a turbo code with additional coding of the information bits is described in [7]. This publication studies the correcting ability of the FSCC when the number of outer-code bits encoded by the inner code is varied, including when the bits encoded by the inner code are multiplied by the coefficients described above.
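The redundancy trade-off described above can be made concrete with a short sketch. The formula below is our own derivation from the rates stated in the introduction, not a formula given in the paper: if a fraction f of the outer-code bits is additionally encoded by an inner code of rate Ri, each outer-code bit expands on average into (1 - f) + f / Ri channel bits.

```python
# Sketch (our derivation): effective code rate of a flexible SCC,
# assuming a fraction f of the outer-code bits is re-encoded by the
# inner code and the rest is transmitted unchanged.
def fscc_rate(r_outer: float, r_inner: float, f: float) -> float:
    """Overall information rate of the flexible concatenated code."""
    bits_per_outer_bit = (1 - f) + f / r_inner   # average channel expansion
    return r_outer / bits_per_outer_bit

# Classic SCC: everything inner-encoded (f = 1), both rates 0.5.
print(fscc_rate(0.5, 0.5, 1.0))   # 0.25, as in the introductory example
# FSCC with every second bit inner-encoded (f = 0.5):
print(fscc_rate(0.5, 0.5, 0.5))   # ~0.333
```

Varying f between 0 and 1 thus sweeps the overall rate continuously between the rate of the outer code alone and the rate of the full cascade.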

Methods and materials
Let us analyze the construction of the FSCC under study. We accept the assumption that the errors present in the symbols at the output of the internal-code decoder have become independent after passing through the deinterleaver (see Fig. 1). Then, knowing from [6,8] the bit error probability pV in the decoded information message for a particular internal code, one can simulate its operation by generating errors with probability pV at the input of the external-code decoder. For the remaining symbols, which arrive at the input of the external-code decoder bypassing the internal-code decoder, we simulate passage through the data transmission channel by generating errors in accordance with the channel bit error probability pB. As the external code we consider a convolutional code decoded with the widespread Viterbi algorithm [6,8,9]. Based on the decoding results, the correcting ability of the code is assessed by estimating the probability pD of an erroneous bit in the decoded information message. The research is carried out at different ratios of the numbers of bits encoded and not encoded by the internal code, and with different values of the coefficient K.
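The error-injection model just described can be sketched in a few lines. The function and variable names here are illustrative (they are not from the paper's simulator), and numpy's random generator is used in place of the scipy.stats routines the paper mentions:

```python
import numpy as np

# Sketch of the error-injection model: symbols that passed through the
# inner decoder are flipped with probability p_v, the remaining symbols
# with the raw channel probability p_b.
rng = np.random.default_rng(42)

def inject_errors(bits: np.ndarray, inner_mask: np.ndarray,
                  p_b: float, p_v: float) -> np.ndarray:
    """Flip each bit with probability p_v where inner_mask is True, else p_b."""
    p = np.where(inner_mask, p_v, p_b)          # per-position flip probability
    flips = rng.random(bits.shape) < p          # Bernoulli error vector
    return bits ^ flips

bits = rng.integers(0, 2, 20)
inner_mask = np.arange(20) % 2 == 0             # every second bit inner-coded
noisy = inject_errors(bits, inner_mask, p_b=0.05, p_v=0.001)
```

Because pV is normally much smaller than pB, the positions covered by the inner code arrive at the outer decoder with far fewer errors, which is exactly the asymmetry the coefficients K are meant to exploit.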
We use as the external code a non-systematic convolutional code described in detail in many sources [4,6,8]; the block diagram of its encoder is shown in Figure 2. For each information bit arriving at the encoder input, two code bits are formed at the output. Accordingly, when an information sequence of six bits, for example 110101, is supplied to the input of the encoder, a code sequence of twelve bits, 11 01 01 00 10 00, is formed. When bipolar signals are transmitted, this code sequence has the form 11 -11 -11 -1-1 1-1 -1-1. In the process of concatenated coding, the entire code sequence from the output of the convolutional encoder, or only part of it, is fed to the input of the inner-code encoder. The input of the external-code decoder therefore receives bits both from the data transmission channel (from the output of the receiver detector) and from the output of the internal-code decoder. A hypothesis was put forward that the values of the previously described coefficients K influence the decoding result. Let us support this assumption with practical examples of decoding the external code. Suppose the detector makes hard decisions about symbol values and two errors are present in the received code sequence, so that the input sequence of the convolutional decoder has the form 11 -11 11 -1-1 11 -1-1 (here and below the erroneous bits are underlined). Let us decode the received code sequence according to the Viterbi algorithm; the trellis decoding process is shown schematically in Figure 3. The following designations are used in the decoding diagram: dotted lines are the allowed transitions along the trellis; the pairs of numbers on the dotted lines are the branch values corresponding to the bits generated by the encoder; the numbers in the trellis nodes are the states of the encoder memory cells; and the numbers next to the trellis nodes are the path metrics. The decoding process is iterative: during each iteration, a pair of bits is decoded. The decoded sequence, divided into pairs of bits, is shown in italics at the top of the diagram, and the path along the trellis corresponding to the transmitted code sequence is highlighted with a solid thick line. The decoder output is generated from the path with the smallest metric. In the example considered, two paths share the minimum metric, so the code sequence may be decoded erroneously. Errors in symbols additionally encoded with the internal code are unlikely but possible, and when the coefficients are used, their occurrence more often leads to decoding errors. Let us confirm this with an example of erroneous decoding in the presence of two errors in the code sequence. Let the first error again be located in the third pair of bits, but in a bit with K = 1, and the second error, as in the previous example, in the tenth bit. The decoder input sequence then has the form 0.5 0.5 -0.5 0.5 -0.5 -1 -0.5 -0.5 0.5 0.5 -0.5 -1. The decoding process is shown schematically in Figure 5. The path with the minimum metric differs from the transmitted code sequence: decoding fails.
The presented examples show that qualitative changes occur in the results of FSCC decoding when the number of bits encoded by the internal code changes and when the coefficients are used. It is therefore advisable to conduct studies that provide quantitative information about the pD value of the FSCC depending on the following parameters: the probability of error in the data transmission channel pB; the probability of error at the output of the internal decoder pV; the value of K; and the number of symbols from the output of the internal code present in the input sequence of the external code. To carry out this research for the considered external code, a software simulator of the FSCC was created in the Python programming language, taking into account the accepted assumptions. The software simulator contains the following main components: an information sequence generator; an external-code encoder; a block modeling the output of the receiver detector with a given probability of an erroneous bit pB; a block simulating the output of the internal-code decoder with a given probability of an erroneous bit pV; and an external-code decoder. In the software simulator, the generation of the information sequence and of the error vectors with given probabilities is carried out using the "scipy" library and its "stats" subpackage, which contains statistical functions [10,11]. The operation of a similar software simulator is described in more detail in [12].
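The outer encoding step can be reproduced with a short sketch. The paper does not list its generator polynomials, but the standard non-systematic rate-1/2, constraint-length-3 code with octal generators (7, 5) is assumed here, since it exactly reproduces the encoding example 110101 -> 11 01 01 00 10 00 given above:

```python
# Sketch of the outer encoder, assuming the standard rate-1/2,
# constraint-length-3 convolutional code with octal generators (7, 5);
# this assumption reproduces the example in the text.
def conv_encode(bits):
    s1 = s2 = 0                      # two memory cells, initially zero
    out = []
    for u in bits:
        out.append(u ^ s1 ^ s2)      # first output: generator 111 (octal 7)
        out.append(u ^ s2)           # second output: generator 101 (octal 5)
        s1, s2 = u, s1               # shift the register
    return out

code = conv_encode([1, 1, 0, 1, 0, 1])
print(code)   # [1,1, 0,1, 0,1, 0,0, 1,0, 0,0] i.e. "11 01 01 00 10 00"
bipolar = [2 * b - 1 for b in code]  # "11 -11 -11 -1-1 1-1 -1-1"
```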
Unlike in the decoding examples considered above, the software simulator implements multiplication by a coefficient K > 1 of the values of the symbols encoded with the internal code, rather than multiplication by reduction factors of the values of the symbols that were not. As the internal code, we alternately use one of three well-known codes: Hamming, Golay, and Reed-Solomon. Let us give a short description of the selected codes. The Hamming codeword consists of four information and three check bits; this code is capable of correcting one error in the codeword. The Golay codeword consists of twelve information and eleven check bits; the Golay code is capable of correcting three errors anywhere in the codeword. The Reed-Solomon code is a non-binary code; in our case, we consider a Reed-Solomon code whose codeword consists of five information and four check bytes and which can correct any two erroneous bytes. The Reed-Solomon code is effective in channels with a high degree of error clustering and has low efficiency in the presence of a larger number of independent errors. Table 1 shows the bit error probability values for the selected internal codes in the presence of independent errors in the channel; the values given in the table are taken from known sources [6,8]. (For example, the Table 1 row for the Reed-Solomon code reads 2.7·10^-1, 4.9·10^-3, 1.0·10^-5, 9.1·10^-9.)
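The effect of weighting inner-decoded symbols before external decoding can be illustrated with a compact soft-decision Viterbi decoder for the same rate-1/2 code. The octal generators (7, 5) are again an assumption that matches the paper's encoding example; the correlation branch metric (maximized), the alternating weighting pattern, and K = 2 are illustrative choices rather than the paper's exact setup (the paper minimizes a path metric, which is equivalent for this purpose):

```python
# Sketch: soft-decision Viterbi decoding of the rate-1/2 (7, 5) code,
# with the received values of inner-decoded symbols multiplied by a
# confidence coefficient K before decoding, as proposed in the text.
def viterbi_decode(r):
    """r: soft bipolar values, two per information bit; returns the info bits."""
    n = len(r) // 2
    metrics = {(0, 0): 0.0}          # path metric per state (s1, s2)
    paths = {(0, 0): []}             # survivor input sequence per state
    for i in range(n):
        r1, r2 = r[2 * i], r[2 * i + 1]
        new_metrics, new_paths = {}, {}
        for (s1, s2), m in metrics.items():
            for u in (0, 1):
                c1, c2 = u ^ s1 ^ s2, u ^ s2            # expected encoder outputs
                cand = m + r1 * (2 * c1 - 1) + r2 * (2 * c2 - 1)  # correlation
                nxt = (u, s1)
                if nxt not in new_metrics or cand > new_metrics[nxt]:
                    new_metrics[nxt] = cand
                    new_paths[nxt] = paths[(s1, s2)] + [u]
        metrics, paths = new_metrics, new_paths
    best = max(metrics, key=metrics.get)                # highest-correlation path
    return paths[best]

# Codeword for 110101 (cf. the text), with K = 2 applied to the
# positions assumed (hypothetically) to be inner-decoded:
K = 2.0
enc = [1, 1, -1, 1, -1, 1, -1, -1, 1, -1, -1, -1]
inner = [i % 2 == 0 for i in range(12)]                 # 1st bit of each pair
weighted = [v * (K if m else 1.0) for v, m in zip(enc, inner)]
print(viterbi_decode(weighted))                         # [1, 1, 0, 1, 0, 1]
```

Scaling the reliable positions makes disagreements there cost more in the path metric, which is precisely the mechanism by which the coefficients bias the decoder toward the inner-decoded symbols.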

Results and discussion
During the research, every tenth, eighth, sixth, fourth, or second bit of the sequence encoded by the outer code was selected for encoding with the inner code. Let us denote this encoding period as N. Then, for example, with N = 2 every second bit is encoded with the inner code, with N = 4 every fourth bit, and so on.
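The selection rule above can be sketched as a simple split of the outer-code output; the convention that selection starts from the first position is an illustrative assumption:

```python
# Sketch: with period N, every N-th bit of the outer-code output goes
# through the inner encoder, the rest go straight to the channel.
def split_by_period(bits, N):
    to_inner = bits[::N]                                   # positions 0, N, 2N, ...
    passthrough = [b for i, b in enumerate(bits) if i % N != 0]
    return to_inner, passthrough

inner, direct = split_by_period([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0], 4)
print(inner)    # [1, 0, 1]
print(direct)   # [1, 0, 1, 1, 0, 0, 0, 0, 0]
```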
The results of the FSCC study with the Reed-Solomon, Hamming, and Golay codes as the internal code are given in Tables 2, 3, and 4, respectively. In the tables, the specific values of K for which the best correcting-ability results were obtained are shown in parentheses. Analyzing the results, we can conclude that encoding only a certain part of the bits with the internal code makes it possible to change the correcting ability within a wide range. The range of such changes increases as the probability of an erroneous bit in the sequence decoded by the external code decreases. Applying a multiplying coefficient to the values of the internally encoded symbols increases the correcting ability. The effect of using the coefficients grows as the probability of an erroneous bit in the decoded sequence decreases, and as the difference between the pB and pV values increases.

Conclusion
The study showed that the parameters of a sequential concatenated code can be changed within a wide range by encoding with the internal code only a certain part of the bits from the output of the external-code encoder. The simulation demonstrated a significant increase in the correcting ability when the symbols decoded by the external code are multiplied by coefficients that differ for symbols that have and have not been encoded by the internal code. The novelty of this work lies both in the method of sequential concatenated coding in which only part of the bits from the output of the external-code encoder is encoded with the internal code, and in the multiplication of the values of the symbols decoded by the external code by the coefficients. The results obtained can be used in the development and construction of various systems for transmitting and processing information.

Figure 1. Block diagram of the SCC.

Figure 2. Structure of a non-systematic convolutional code encoder.

Figure 3. Structure of decoding a codeword containing two errors.

Figure 5. Structure of decoding a codeword with an error in a bit with K = 1.

Figure 6. Structure of codeword decoding in the presence of three errors.

Table 1. Probability of occurrence of an erroneous bit in a decoded information message.

Table 2. Probability of an erroneous bit in a decoded information message for an internal Reed-Solomon code.

Table 3. Probability of an erroneous bit in a decoded information message for an internal Hamming code.

Table 4. Probability of an erroneous bit in a decoded information message for an internal Golay code.