Page 87 - Kaleidoscope Academic Conference Proceedings 2021
Connecting physical and virtual worlds
2. Therefore, we can consider the same matrix Σ to model all of the covariance (correlation) matrices. This leads to very good results when the proposed SBL technique is applied to FDD massive MIMO systems, in contrast to the other SBL schemes published in the literature.

The conjugate prior of a Gaussian distribution times a Gaussian distribution is a Gaussian distribution. Thus, based on Bayes' rule, the posterior P(h | y, α, γ, Σ_i) ∼ N(μ, Σ) follows a Gaussian distribution, with its mean and covariance given respectively by

    μ = γ Σ A^H y,                                         (7)

    Σ = (D + γ A^H A)^{-1},                                (8)

where D is the block-diagonal matrix

    D = blkdiag{(α_1 Σ_1)^{-1}, · · · , (α_M Σ_M)^{-1}}.   (9)

Substituting (8) into (7) yields

    μ = γ (D + γ A^H A)^{-1} A^H y.                        (10)

The maximum a posteriori (MAP) estimate of h is the mean of its posterior distribution, i.e.,

    ĥ = μ = (A^H A + γ^{-1} D)^{-1} A^H y.                 (11)

The mean square error (MSE) of the proposed estimator is given by

    MSE = Tr{(D + γ A^H A)^{-1}}.                          (12)

3.2 Hyperparameters Estimation

To obtain ĥ, we need to jointly estimate the hyperparameters α, γ and Σ_i, which can be achieved via the Type-II maximum likelihood approach by employing Expectation-Maximization (EM) optimization to maximize the posterior P(h | y, α, γ, Σ_i). This is equivalent to minimizing the following cost function

    L(θ) = log|C_y| + y^H C_y^{-1} y,  with C_y = γ^{-1} I + A D^{-1} A^H,   (13)

where θ denotes all the hyperparameters, i.e., θ = {γ, {Σ_i, α_i}_{i=1}^{M}}. The learning rules for α_i, γ and Σ_i can then be given as follows (for the detailed derivations of these equations, see [18]):

    α_i^{new} = Tr(Σ_i^{-1} (Σ^i + μ^i (μ^i)^H)) / L,   i = 1, · · · , M,   (14)

    Σ^{new} = (1/M) Σ_{i=1}^{M} (Σ^i + μ^i (μ^i)^H) / α_i,                  (15)

    γ^{new} = NL / (‖y − A μ‖_2^2 + γ^{-1} [ML − Tr(D Σ)]),                 (16)

where μ^i is the corresponding i-th block of μ, and Σ^i is the corresponding i-th diagonal block of Σ.

4. OPTIMAL PILOT DESIGN

In (12), we observe that the MSE of the proposed SBL estimation algorithm depends on the choice of the pilot sequence. Hence, in this section, we develop an optimal pilot design that minimizes the mean square error of the proposed estimator, i.e., f(A) defined in (12), subject to the transmit power constraint at each mobile user. To that end, we introduce the following optimization problem:

    min_A   f(A),
    s.t.    Tr(A^H A) ≤ p_A.                               (17)

Introducing an auxiliary variable G, we recast (17) as

    min_{A,G}   Tr(G),
    s.t.        Tr(A^H A) ≤ p_A,
                G ⪰ (D + γ A^H A)^{-1}.                    (18)

Letting L = A^H A, of size ML × ML, i.e., L ⪰ 0 and rank(L) = N, (18) can be equivalently cast as

    min_{L,G}   Tr(G),
    s.t.        Tr(L) ≤ p_A,
                G ⪰ (D + γ L)^{-1},
                L ⪰ 0,
                rank(L) = N.                               (19)

Adopting the Schur complement [20] and dropping the rank constraint, we recast (19) as

    min_{L,G}   Tr(G),
    s.t.        Tr(L) ≤ p_A,
                [ D + γL    I ]
                [    I      G ]  ⪰ 0,                      (20)
                L ⪰ 0.

Optimization problem (20) is in semi-definite programming (SDP) form [20]; it is convex and can be solved by an optimization package such as CVX [20] to obtain the optimal L⋆. Having L⋆, the pilot design matrix A can be attained by solving the following problem:

    A⋆ = arg min_A ‖A^H A − L⋆‖².                          (21)

Let the eigenvalue decomposition of L⋆ be

    L⋆ = Q Λ Q^H,                                          (22)

where Q ∈ C^{ML×ML} is a unitary matrix whose columns are the eigenvectors of L⋆, and Λ = diag{λ_1, λ_2, · · · , λ_{ML}}, with λ_1 ≥ λ_2 ≥ · · · ≥ λ_{ML} ≥ 0, is a diagonal matrix whose diagonal elements are the corresponding eigenvalues of L⋆. According to [21], the closed-form solution to (21) is

    A⋆ = V diag{√λ_1, √λ_2, · · · , √λ_N} Q^H,             (23)
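As a concreteness check on the estimator of Section 3, the following Python sketch (illustrative only: the dimensions, noise precision γ, hyperparameters α_i and the common correlation matrix Σ are arbitrary assumptions, and real arithmetic replaces complex, so A^H becomes A^T) builds D as in (9), forms the posterior quantities of (7), (8) and (11), and confirms numerically that the posterior mean coincides with the MAP estimate, together with the MSE of (12):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: M = 3 user blocks, each of length Lb = 4,
# and 20 real-valued observations.
M, Lb, Nobs = 3, 4, 20
MLb = M * Lb

A = rng.standard_normal((Nobs, MLb)) / np.sqrt(Nobs)  # pilot/measurement matrix
gamma = 5.0                                           # noise precision (assumed)
alpha = rng.uniform(0.5, 2.0, M)                      # per-block hyperparameters
Sig = np.eye(Lb)                                      # common correlation matrix (assumed)

# D = blkdiag{(alpha_1 Sig)^-1, ..., (alpha_M Sig)^-1}, cf. (9)
D = np.zeros((MLb, MLb))
for i, a in enumerate(alpha):
    D[i*Lb:(i+1)*Lb, i*Lb:(i+1)*Lb] = np.linalg.inv(a * Sig)

y = rng.standard_normal(Nobs)

# Posterior covariance (8) and posterior mean (7)
Post = np.linalg.inv(D + gamma * A.T @ A)
mu = gamma * Post @ A.T @ y

# Equivalent MAP form (11): factoring gamma inside the inverse
h_map = np.linalg.solve(A.T @ A + D / gamma, A.T @ y)
assert np.allclose(mu, h_map)

# MSE of the estimator, cf. (12) -- the objective f(A) minimized in Section 4
mse = np.trace(Post)
print(mse)
```

Because (11) is just (10) with γ pulled inside the inverse, the two forms must agree to machine precision; the last line evaluates the pilot-dependent objective f(A) that Section 4 minimizes.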
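The pilot-recovery step (21)–(23) also admits a short numerical sketch. In the snippet below (a sketch, not the paper's code: a synthetic rank-N positive semi-definite matrix stands in for the CVX solution L⋆ of (20), V is taken as the identity since any unitary V leaves A^H A unchanged, and real arithmetic again replaces complex), the eigendecomposition recipe of (22)–(23) recovers a pilot matrix whose Gram matrix matches L⋆:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: channel dimension ML = 8, pilot length N = 5.
ML, N = 8, 5

# Stand-in for the SDP solution L* of (20): a PSD matrix of rank N.
B = rng.standard_normal((ML, N))
L_opt = B @ B.T

# Eigendecomposition L* = Q Lambda Q^H with eigenvalues sorted
# in descending order, cf. (22).
lam, Q = np.linalg.eigh(L_opt)          # eigh returns ascending order
order = np.argsort(lam)[::-1]
lam, Q = lam[order], Q[:, order]

# Closed-form pilot, cf. (23): A = V diag{sqrt(lam_1..lam_N)} Q^H,
# here with V = I_N; clip tiny negative eigenvalues from round-off.
A = np.sqrt(np.maximum(lam[:N], 0.0))[:, None] * Q[:, :N].T

# Since rank(L*) = N, the objective of (21) is driven to zero:
assert np.allclose(A.T @ A, L_opt)
print(np.linalg.norm(A.T @ A - L_opt))
```

When rank(L⋆) ≤ N the minimum of (21) is exactly zero, which is why the Gram-matrix assertion holds up to floating-point error; the transmit power Tr(A^H A) then equals Tr(L⋆), matching the constraint of (20).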