The function Φ2 in (19) and functions Φ1, Φ3 and Φ4 in (20) are defined as follows:

Φ1(i_t, h_{t−1}) = σ(W1 h_{t−1} + Y1 i_t + b1)   (21)

Φ2(i_t, h_{t−1}) = σ(W2 h_{t−1} + Y2 i_t + b2)   (22)

Φ3(i_t, h_{t−1}) = σ(W3 h_{t−1} + Y3 i_t + b3)   (23)

Φ4(i_t, h_{t−1}) = tanh(W4 h_{t−1} + Y4 i_t + b4)   (24)

In equations (21)-(24), matrices W1, W2, W3, W4, Y1, Y2, Y3, Y4 and vectors b1, b2, b3, b4 are obtained by NN training.
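To make the state update concrete, here is a minimal NumPy sketch of one encoder recursion step built from (21)-(24). It assumes that (19)-(20), which are not visible in this excerpt, are the standard LSTM recursions h_t = Φ2(i_t, h_{t−1}) ⊙ tanh(c_t) and c_t = Φ3(i_t, h_{t−1}) ⊙ c_{t−1} + Φ1(i_t, h_{t−1}) ⊙ Φ4(i_t, h_{t−1}); all names, shapes and the random parameters are illustrative stand-ins for trained values, not the trained DEF encoder.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(i_t, h_prev, c_prev, W, Y, b):
    """One recurrent state update; W[k], Y[k], b[k] play the roles of
    W_{k+1}, Y_{k+1}, b_{k+1} in (21)-(24)."""
    phi1 = sigmoid(W[0] @ h_prev + Y[0] @ i_t + b[0])  # (21)
    phi2 = sigmoid(W[1] @ h_prev + Y[1] @ i_t + b[1])  # (22)
    phi3 = sigmoid(W[2] @ h_prev + Y[2] @ i_t + b[2])  # (23)
    phi4 = np.tanh(W[3] @ h_prev + Y[3] @ i_t + b[3])  # (24)
    c_t = phi3 * c_prev + phi1 * phi4   # assumed form of (20)
    h_t = phi2 * np.tanh(c_t)           # assumed form of (19)
    return h_t, c_t

# Toy usage: state size n = 8, input size d = 3, random stand-in parameters.
rng = np.random.default_rng(0)
n, d = 8, 3
W = rng.normal(size=(4, n, n))
Y = rng.normal(size=(4, n, d))
b = np.zeros((4, n))
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=d), h, c, W, Y, b)
```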
2.4 Mitigation of unequal bit error distribution

It has been observed in [1] that the feedback codes based on RNNs exhibit a non-uniform bit error distribution, i.e., the final message bits typically have a significantly larger error rate compared to other bits. In order to mitigate the detrimental effect of non-uniform bit error distribution, [1] introduced two countermeasures:

• Zero-padding. Zero-padding consists in appending at least one information bit with predefined value (e.g., zero) at the end of the message. The appended information bit(s) are discarded at the decoder, such that the positions affected by higher error rates carry no information.

• Power reallocation. Zero-padding alone is not enough to mitigate unequal errors, and moreover it reduces the effective code rate. Instead, power reallocation redistributes the power among the codeword symbols so as to provide better error protection to the message bits whose positions are more error-prone, i.e., the initial and final positions. Both countermeasures are sketched in the code example after this list.
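The following sketch illustrates the two countermeasures. The pad length, the edge-weight profile and the renormalization rule are illustrative assumptions, not the trained power allocation of [1].

```python
import numpy as np

def zero_pad(m, n_pad=1):
    """Append n_pad known zero bits at the end of the message;
    the decoder discards the corresponding error-prone final positions."""
    return np.concatenate([m, np.zeros(n_pad, dtype=m.dtype)])

def reallocate_power(c, edge_boost=1.5, edge_len=2):
    """Boost the power of the first/last codeword symbols, then rescale
    the whole codeword so that the total transmit power is unchanged."""
    w = np.ones(c.size)
    w[:edge_len] *= edge_boost
    w[-edge_len:] *= edge_boost
    s = c * w
    return s * np.sqrt(np.sum(np.abs(c) ** 2) / np.sum(np.abs(s) ** 2))

# Example: pad a 6-bit message and protect the edges of a unit-power codeword.
m = np.array([1, 0, 1, 1, 0, 1])
print(zero_pad(m))                                # [1 0 1 1 0 1 0]
print(np.round(reallocate_power(np.ones(12)), 3))
```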
2.5 DEF decoder

In DNN-based codes, encoder and decoder are implemented as two separate DNNs whose coefficients are determined through a joint encoder-decoder training procedure. Therefore, the encoder structure has impact on the decoder coefficients obtained through training, and vice versa. In that sense, the chosen decoder structure has impact on the resulting code.

Fig. 3 – DEF decoder. [Diagram not reproduced: forward and backward recurrent state chains with normalization ("Norm.") blocks feeding the output stage.]

The DEF decoder (see Fig. 3) maps the received DEF codeword to a decoded message m̂ as follows:

m̂ = D(x̄, p̄^(1), …, p̄^(L)).   (25)

The decoder consists of a bidirectional recurrent NN (a GRU or LSTM) followed by a linear transformation and a sigmoid function. The bidirectional recurrent NN computes a sequence of forward states h′_t and backward states h″_t as follows:

h′_t = Φ′(ȳ_t, h′_{t−1})   (26)

h″_{t−1} = Φ″(ȳ_t, h″_t)   (27)

where functions Φ′, Φ″ are defined as in (15) for the GRU-based decoder and as in (19) for the LSTM-based decoder, and the input column vector ȳ_t is defined as follows:

       ⎡ x̄(t−ν_0 : t)       ⎤
ȳ_t = ⎢ q̄_0(t−ν_1 : t)     ⎥ ,   (28)
       ⎢ ⋮                   ⎥
       ⎣ q̄_{N−1}(t−ν_N : t) ⎦

where x̄(t−ν_0 : t) is a column vector of length ν_0 + 1 which contains symbols from the received systematic sequence x̄ of (1), and q̄_j(t−ν_{j+1} : t), j = 0, …, N−1, is a column vector of length ν_{j+1} + 1 containing symbols from the sequence q̄_j, which consists of the j-th symbol of each of the L received parity sequences p̄^(i) of (5). q̄_j is defined as follows:

q̄_j ≜ ( p̄^(1)(j), …, p̄^(L)(j) ),   j = 0, …, N−1.   (29)

Finally, the values ν_0, …, ν_N are arbitrary non-negative integers, hereafter called the decoder input extensions. The initial forward NN state h′_0 and the initial backward NN state h″_{L−1} are set as all-zero vectors.
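As an illustration of (28)-(29), the sketch below assembles the decoder input ȳ_t from the received systematic sequence and the parity streams. Treating symbols before the start of a sequence as zeros is an assumption made here for the window boundary, which this excerpt does not specify; all names and shapes are likewise illustrative.

```python
import numpy as np

def build_decoder_input(t, x_bar, q_bar, nu):
    """Assemble ȳ_t of (28).

    x_bar : received systematic sequence, shape (L,)
    q_bar : streams of (29), shape (N, L); q_bar[j, i] holds the j-th
            symbol of the i-th received parity sequence
    nu    : decoder input extensions (nu_0, ..., nu_N)
    """
    def window(seq, nu_k):
        # (seq[t-nu_k], ..., seq[t]), zero-padded where t - nu_k < 0
        out = np.zeros(nu_k + 1)
        lo = max(0, t - nu_k)
        out[nu_k - (t - lo):] = seq[lo:t + 1]
        return out

    parts = [window(x_bar, nu[0])]
    parts += [window(q_bar[j], nu[j + 1]) for j in range(q_bar.shape[0])]
    return np.concatenate(parts)

# Deepcode-like case: N = 2 parity streams; all-zero extensions reduce
# ȳ_t to (x̄(t), q̄_0(t), q̄_1(t)).
rng = np.random.default_rng(1)
L, N = 10, 2
x_bar, q_bar = rng.normal(size=L), rng.normal(size=(N, L))
print(build_decoder_input(4, x_bar, q_bar, nu=[0, 0, 0]))        # 3 values
print(build_decoder_input(4, x_bar, q_bar, nu=[2, 1, 1]).shape)  # (7,)
```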
The decoder output is obtained as follows:

m̂_t ≜ σ( C [ h̃′_t ; h̃″_{t−1} ] + d ),   (30)

where σ(⋅) is the sigmoid function, [ · ; · ] denotes vertical stacking, C is a matrix of size K_b/2 × 2n and d is a vector of size K_b/2, n being the recurrent NN state size and K_b the number of bits per modulation symbol. C and d are obtained by NN training. Vectors h̃′_t and h̃″_t are obtained by normalizing vectors h′_t and h″_t so that each element of h̃′_t and h̃″_t has zero mean and unit variance. Vector m̂_t provides the estimates of the message bits in a corresponding K_b/2-tuple, that is:

m̂_t = ( m̂(t K_b/2), …, m̂((t+1) K_b/2 − 1) ).   (31)

The Deepcode decoder from [1] is recovered by setting ν_i = 0, i = 0, 1, …, N, in (28).
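A sketch of the output stage (30)-(31) follows. Here the normalization statistics are taken over the time dimension and the pairing of h̃′_t with h̃″_{t−1} wraps around at t = 0; both choices, like every name and shape below, are simplifying assumptions rather than the trained DEF configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def normalize(h, eps=1e-8):
    """Make each state element zero-mean and unit-variance (statistics
    over time here, as a stand-in for the normalization in the text)."""
    return (h - h.mean(axis=0)) / (h.std(axis=0) + eps)

def decoder_output(h_fwd, h_bwd, C, d):
    """Bit estimates per (30)-(31).

    h_fwd, h_bwd : forward/backward decoder states, shape (L, n)
    C, d         : output parameters, shapes (kb_half, 2n) and (kb_half,)
    Returns an (L, kb_half) array: one K_b/2-tuple of estimates per step t.
    """
    hf, hb = normalize(h_fwd), normalize(h_bwd)
    hb_prev = np.roll(hb, 1, axis=0)               # aligns h̃″_{t-1} with h̃′_t
    stacked = np.concatenate([hf, hb_prev], axis=1)
    return sigmoid(stacked @ C.T + d)

# Toy shapes: L = 6 decoding steps, state size n = 4, K_b/2 = 2.
rng = np.random.default_rng(2)
L, n, kb_half = 6, 4, 2
m_hat = decoder_output(rng.normal(size=(L, n)), rng.normal(size=(L, n)),
                       rng.normal(size=(kb_half, 2 * n)), np.zeros(kb_half))
print(m_hat.shape)  # (6, 2)
```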
3. TRANSCEIVER TRAINING

The coding and modulation schemes used in conventional communication systems are optimized for a given SNR range. We take the same approach for DNN-based codes, as DNN code training produces different codes depending on the training SNR.