báo cáo hóa học:" Research Article Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation"

pdf
Số trang báo cáo hóa học:" Research Article Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation" 19 Cỡ tệp báo cáo hóa học:" Research Article Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation" 1 MB Lượt tải báo cáo hóa học:" Research Article Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation" 0 Lượt đọc báo cáo hóa học:" Research Article Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation" 1
Đánh giá báo cáo hóa học:" Research Article Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation"
4.2 ( 5 lượt)
Nhấn vào bên dưới để tải tài liệu
Đang xem trước 10 trên tổng 19 trang, để tải xuống xem đầy đủ hãy nhấn vào bên trên
Chủ đề liên quan

Nội dung

Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2009, Article ID 178785, 19 pages
doi:10.1155/2009/178785

Research Article
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation

Kenneth W. K. Lui (EURASIP Member) and H. C. So
Department of Electronic Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong
Correspondence should be addressed to H. C. So, hcso@ee.cityu.edu.hk

Received 19 February 2009; Revised 8 September 2009; Accepted 20 November 2009
Recommended by T.-H. Li

We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty in optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators require finding the global minimum or maximum of multimodal cost functions, because the frequencies are nonlinear in the observed signals. By relaxing the nonconvex ML formulations into semidefinite programs, high-fidelity approximate solutions are obtained in a globally optimum fashion. Computer simulations are included to contrast the estimation performance of the proposed semidefinite relaxation methods with the iterative quadratic maximum likelihood technique as well as the Cramér-Rao lower bound.

Copyright © 2009 K. W. K. Lui and H. C. So. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

The problem of sinusoidal parameter estimation in additive noise has been an important research topic because of its numerous applications in science and engineering [1-5]. The typical parameters of interest are frequencies, amplitudes, and phases. As the frequencies are nonlinear functions of the received noisy measurements, they are usually estimated prior to the amplitudes and phases. In this paper, we focus on optimal parameter estimation for the representative signal models of a single complex/real tone, multiple complex sinusoids, and a single two-dimensional (2D) complex tone, although the development can also be extended to other related models [6-13] in the literature.

Utilizing the linear prediction polynomial (LPP) of the noise-free signal, an ML estimator for sinusoidal parameter estimation has been proposed in [7, 8] under the standard assumption of Gaussian noise. The nonlinear least squares (NLS) estimator [14] also attains ML performance when the noise is white. In particular, the ML frequency estimate of a single complex sinusoid in white Gaussian noise can be obtained from the periodogram peak [2, 15]. However, the cost functions of all these ML methods are multimodal, and two steps are typically involved in the estimation procedure. First, suboptimal initial parameter estimates are obtained by the discrete Fourier transform (DFT), linear prediction [16], or other means [7, 8]. A refinement is then made through iterative optimization of the cost functions. Consequently, sufficiently accurate initial estimates are necessary for global convergence. In this work, we approximate the nonconvex ML formulations as convex optimization problems so that a high-quality global solution is guaranteed.
In fact, convex optimization [17-19] has been applied in a number of signal processing and communication applications [20-28]. Phase-shift keying demodulation, which is a Boolean quadratic problem with polynomial constraints, is studied in [20]. Multiple-input multiple-output detection, which is a closest lattice point search problem, is examined in [21-23]. In addition, the problems of multigroup multicast transmit beamforming, digital filter design, and localization are considered in [24-28], respectively. These problems are nonconvex in nature but can be approximated as constrained convex optimization programs in which the objective functions are convex, the inequality constraints are convex, and the equality constraints are linear.

The rest of the paper is organized as follows. We start with the simplest single complex tone model corrupted by white Gaussian noise in Section 2. Through semidefinite relaxation (SDR), the periodogram, LPP, and NLS estimators are approximated as convex programs. In the first two SDR schemes, the frequency parameter is obtained prior to amplitude and phase estimation, while the third one is a waveform estimator. In Section 3, the SDR approximations of the LPP and NLS estimators for a noisy real sinusoid are derived. Section 4 extends the SDR-LPP methodology of Sections 2 and 3 to the multiple tone model, and an illustration for the dual tone case is provided. The SDR formulations of the periodogram and NLS estimators for a 2D complex tone are derived in Section 5, while the LPP version is omitted as its development is too tedious. Fast implementation of some of the developed SDR solutions for large observation records is suggested in Section 6. In Section 7, numerical examples are included to contrast the estimation performance of the proposed SDR methods with the iterative quadratic maximum likelihood (IQML) technique [7, 8] as well as the Cramér-Rao lower bound (CRLB). It is worth pointing out that although the algorithms can work for noises that are non-Gaussian and/or colored, we keep the assumption of white Gaussian noise so that their optimality can be easily assessed against the CRLB benchmark. Finally, conclusions are drawn in Section 8. A list of symbols used in the paper is given in Table 1.

Table 1: List of symbols.
R^N — set of N × N real matrices
C^N — set of N × N complex matrices
[a]_i — ith element of vector a
[A]_{i,j} — (i, j) entry of matrix A
A^T — transpose of A
A^H — Hermitian transpose of A
A^* — conjugate of A
tr(A) — trace of A
A ⪰ B — A − B is positive semidefinite
∠(a) — angle of complex variable a
1_N — square matrix of size N × N with all entries equal to 1
0_N — square matrix of size N × N with all entries equal to 0
I_N — identity matrix of size N × N
diag(a) — diagonal matrix with vector a as main diagonal
diag(A, k) — column vector formed by the elements of the kth diagonal of A
blkdiag(A_1, A_2, ..., A_n) — block diagonal matrix whose diagonal blocks are the square matrices A_1, A_2, ..., A_n
Toeplitz(a) — symmetric or Hermitian Toeplitz matrix formed by a, which is the first row
Â — estimate of A
E{A} — expectation of A
⌊A⌋ — round elements of A to the nearest integers towards −∞
sign(A) — sign of A
vec(A) — vectorization of A to produce a column vector containing all its elements
perm1(A, M, N) ∈ C^{MN} — [perm1(A, M, N)]_{k,l} = [A]_{i,j}, i, j = 1, 2, ..., MN, with k = M(i − 1) − (MN − 1)⌊i/N⌋ + 1 and l = M(j − 1) − (MN − 1)⌊j/N⌋ + 1 (see the appendix)
perm2(A, M, N) ∈ C^{min([M,N])} — [perm2(A, M, N)]_{k,l} = [A]_{i,j}, i, j = 1, 2, ..., MN, with i = M(k − 1) + k and j = M(l − 1) + l (see the appendix)
min(a) — minimum element of vector a

2. Single Complex Tone

In this section, we tackle the simplest form of the sinusoidal estimation problem, namely, finding the parameters of a complex tone in additive noise. The discrete-time signal model is

y(n) = s(n) + η(n),  n = 1, 2, ..., N,   (1)

where

s(n) = A exp{j(ωn + φ)}.   (2)

Here A > 0, ω ∈ (−π, π), and φ ∈ [0, 2π) are deterministic unknown constants which denote the sinusoidal amplitude, frequency, and phase, respectively, while the additive noise η(n) is a zero-mean complex Gaussian process. It is assumed that the covariance matrix of the noise vector η = [η(1), η(2), ..., η(N)]^T, denoted by Σ_η = E{ηη^H}, where E is the expectation operator, is known up to a scalar. Our objective is to estimate A, ω, and φ from the N noisy measurements y(n). Three SDR methods, namely, periodogram, LPP, and NLS, are proposed to perform the estimation as follows.
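For concreteness in the code sketches that follow, the model (1)-(2) can be simulated in a few lines of Python/NumPy. This is our own illustrative choice (the paper itself only points to solvers such as SeDuMi [29] and SDPT3 [30, 31]); the parameter values below are hypothetical, and white complex Gaussian noise is assumed as in Section 2.1.

```python
import numpy as np

# Hypothetical illustration values; the paper treats A, omega and phi as arbitrary unknowns
# and assumes zero-mean white complex Gaussian noise in Section 2.1.
rng = np.random.default_rng(0)
N, A, omega, phi, sigma = 20, 1.0, 0.4 * np.pi, 0.3, 0.5

n = np.arange(1, N + 1)
s = A * np.exp(1j * (omega * n + phi))          # noise-free tone, (2)
noise = (sigma / np.sqrt(2)) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = s + noise                                   # observed measurements, (1)
```

The vector y generated here is reused as the input of the later sketches.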
2.1. Periodogram Approach. In this subsection, we assume that {η(n)} are independent and identically distributed (IID), that is, Σ_η = σ_η^2 I_N, where σ_η^2 is unknown. In this case, the ML frequency estimate can be obtained from the periodogram maximum [15]:

max_ω |Y(ω)|^2,   (3)

where Y(ω) = Σ_{n=1}^{N} y(n) exp{−jωn} is the discrete-time Fourier transform (DTFT) of y(n). The difficulty of solving (3) lies in its multimodality. A conventional strategy is first to obtain an initial estimate from the DFT peak and then apply a gradient search for (3). Here, (3) will be relaxed into a convex optimization problem so that a high-quality global solution is guaranteed. First, it is clear that (3) is equivalent to the following constrained problem:

max_{c,ω} |c^H y|^2
s.t. c(n) = exp{jωn},  n = 1, 2, ..., N,   (4)

where y = [y(1), y(2), ..., y(N)]^T and c = [c(1), c(2), ..., c(N)]^T. The constraints make (4) a nonconvex program, as exp{·} is a periodic function. Furthermore, if the constraints are removed, maximization of the convex objective function |c^H y|^2 = y^H cc^H y will result in c(n) → ∞, as the function is not bounded above. To avoid this, we define C = cc^H ∈ C^N to make the objective function affine: |c^H y|^2 is rewritten as y^H C y, which is linear in C. Using the sinusoidal property of c(n), it is easy to show that C has the special Toeplitz structure

C = Toeplitz(1, c(1)^*, c(2)^*, ..., c(N−1)^*),   (5)

that is, its first row is [1, c(1)^*, ..., c(N−1)^*] and its kth subdiagonal equals c(k). Then (4) is equivalent to

max_{C,c} y^H C y
s.t. C = cc^H = Toeplitz(1, c(1)^*, c(2)^*, ..., c(N−1)^*).   (6)

Finally, relaxing C = cc^H to C ⪰ cc^H yields the SDR formulation for single complex tone frequency estimation:

max_{C,c} y^H C y
s.t. [C, c; c^H, 1] ⪰ 0_{N+1},
     C = Toeplitz(1, c(1)^*, c(2)^*, ..., c(N−1)^*).   (7)

Note that no constraints are imposed on c(n), n = 1, 2, ..., N−1. In the optimization literature, there are readily available solvers for finding the globally optimum solution of (7), such as SeDuMi [29] and SDPT3 [30, 31], which basically employ interior-point methods [32, 33]. As the estimate of c, namely ĉ, is constrained to be a unity-amplitude tone with zero initial phase, the frequency estimate, denoted by ω̂, is computed as

ω̂ = ∠(ĉ(1)).   (8)

With the use of (1), (2), and ĉ, the amplitude and phase estimates, denoted by Â and φ̂, are calculated using least squares (LS) as

Â = |ĉ^H y| / N,   (9)

φ̂ = ∠(ĉ^H y).   (10)
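As a concrete illustration of (7)-(10), here is a minimal CVXPY sketch (our own rendering, not the authors' code). The bordered matrix [C, c; c^H, 1] is modelled as a single Hermitian PSD variable, the Toeplitz structure of C and its link to c are imposed entrywise, and an SDP-capable solver (e.g., SCS) is assumed to be available; function and variable names are illustrative only.

```python
import numpy as np
import cvxpy as cp

def sdr_periodogram_single_tone(y):
    """Sketch of the periodogram SDR (7) and the recovery steps (8)-(10)."""
    N = len(y)
    Z = cp.Variable((N + 1, N + 1), hermitian=True)    # Z = [[C, c], [c^H, 1]]
    C, c = Z[:N, :N], Z[:N, N]
    cons = [Z >> 0, Z[N, N] == 1, cp.diag(C) == np.ones(N)]
    for i in range(1, N):                              # tie the kth subdiagonal of C to c(k)
        for j in range(i):
            cons.append(C[i, j] == c[i - j - 1])
    objective = cp.real(cp.trace(C @ np.outer(y, y.conj())))   # equals y^H C y
    cp.Problem(cp.Maximize(objective), cons).solve()
    c_hat = np.asarray(c.value).ravel()
    proj = np.vdot(c_hat, y)                           # c_hat^H y
    omega_hat = np.angle(c_hat[0])                     # (8)
    A_hat = np.abs(proj) / N                           # (9)
    phi_hat = np.angle(proj)                           # (10)
    return omega_hat, A_hat, phi_hat
```

Usage on the synthetic data above would simply be: omega_hat, A_hat, phi_hat = sdr_periodogram_single_tone(y).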
2.2. Linear Prediction Polynomial Approach. Following the development in [34] and using the property s(n) = exp{jω} s(n−1), it can be shown that the ML frequency estimator in the presence of Gaussian noise is determined from minimization of the following multimodal function:

min_{c(1)} (y_2 − c(1) y_1)^H Σ_e^{−1} (y_2 − c(1) y_1),   (11)

where y_i = [y(i), y(i+1), ..., y(N−2+i)]^T, i = 1, 2, and Σ_e = E{ee^H} ∈ C^{N−1} is a weighting matrix with e = [e(1), e(2), ..., e(N−1)]^T, whose elements are e(n) = η(n+1) − c(1)η(n), n = 1, 2, ..., N−1. The presence of c(1) in Σ_e^{−1} makes (11) violate the convex framework. We refer to the estimator of (11) as the LPP approach. Unlike the periodogram, the ML property of (11) holds for a general form of Σ_η. Moreover, (11) can be extended to the scenarios of multiple real or complex sinusoids. It is noteworthy that the iterative quadratic maximum likelihood (IQML) [7] technique provides a standard solution to (11) by relaxing the LPP cost function to a quadratic form, although a sufficiently close initial guess, usually obtained by solving (11) with Σ_e = I_{N−1}, is required to attain the global minimum. Furthermore, the LPP scheme has similarities to the linear filtering approach in [35, 36]. For simplicity but without loss of generality, here we assume that Σ_η = σ_η^2 I, and Σ_e is then of the form

Σ_e = Toeplitz([2, −c(1)^*, 0, 0, ..., 0]) σ_η^2,   (12)

where the first row contains N − 3 trailing zeros. Ignoring the scaling factor σ_η^2 in (12), the minimizer of (11) is equivalently obtained from

min_{v, c(1)} v   (13)

s.t. v = (y_2 − c(1) y_1)^H Σ_e^{−1} (y_2 − c(1) y_1),   (14)

Σ_e = Toeplitz([2, −c(1)^*, 0, 0, ..., 0]).   (15)

By replacing (14) with its Schur complement and relaxing the resultant expression, the SDR formulation for solving c(1) is

min_{v, c(1)} v
s.t. [v, (y_2 − c(1)y_1)^H; y_2 − c(1)y_1, Σ_e] ⪰ 0_N,
     Σ_e = Toeplitz([2, −c(1)^*, 0, 0, ..., 0]).   (16)

It is noteworthy that the inverse operator is removed in the Schur complement and there is a linear relation between the weighting matrix elements and the parameter estimate; as a result, tight constraints are formed automatically. After obtaining ĉ(1), the estimates of frequency, amplitude, and phase are computed using (8), (9), and (10), respectively.
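A sketch of the LPP SDR (16) in the same CVXPY style follows. It assumes white noise so that the scaled Σ_e of (15) applies, and it embeds the Schur-complement block [v, (y_2 − c(1)y_1)^H; y_2 − c(1)y_1, Σ_e] in a single Hermitian PSD variable whose sub-blocks are tied to the affine expressions by equality constraints. Again, this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np
import cvxpy as cp

def sdr_lpp_single_tone(y):
    """Sketch of the LPP SDR (16); returns the frequency estimate via (8)."""
    N = len(y)
    y1, y2 = y[:-1], y[1:]                               # y_1, y_2 in (11)
    U = np.eye(N - 1, k=1)                               # upper shift matrix
    c1 = cp.Variable(complex=True)                       # c(1) = exp{j*omega}
    W = cp.Variable((N, N), hermitian=True)              # W = [[v, r^H], [r, Sigma_e]]
    r = y2 - c1 * y1                                     # LPP residual
    Sigma_e = 2 * np.eye(N - 1) - cp.conj(c1) * U - c1 * U.T   # Toeplitz([2, -c(1)^*, 0, ...])
    cons = [W >> 0,
            W[1:, 0] == r,                               # border equals the residual
            W[1:, 1:] == Sigma_e]                        # lower-right block is the weighting matrix
    cp.Problem(cp.Minimize(cp.real(W[0, 0])), cons).solve()
    return np.angle(c1.value)                            # omega_hat, as in (8)
```

After ω̂ = ∠ĉ(1) is obtained this way, the amplitude and phase follow from (9)-(10) exactly as in the periodogram sketch.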
2.3. Nonlinear Least Squares Approach. It is straightforward to see that the NLS estimator for the sinusoidal parameters can be expressed as the following constrained optimization problem:

min_{s,A,ω,φ} (y − s)^H Σ_η^{−1} (y − s)
s.t. s(n) = A exp{j(ωn + φ)},  n = 1, 2, ..., N,   (17)

where y = [y(1), y(2), ..., y(N)]^T and s = [s(1), s(2), ..., s(N)]^T. Similar to (4), the constraints s(n) = A exp{j(ωn + φ)}, n = 1, 2, ..., N, make (17) violate the convex framework. Based on the linear prediction property, the constraint on s(n) can alternatively be expressed as

s(n) − exp{jωm} s(n − m) = 0  ⟹  s(n) s(n − m)^* = A^2 exp{jωm} = A^2 c(m),  m = 1, 2, ..., N − 1.   (18)

Defining S = ss^H ∈ C^N with [S]_{i,j} = s(i)s(j)^* = A^2 c(i − j) and using (18), the constraint in (17) is equivalent to

S = ss^H = Toeplitz(A^2 c(0), A^2 c(1)^*, ..., A^2 c(N−1)^*).   (19)

On the other hand, expanding the objective function of (17) yields

s^H Σ_η^{−1} s − y^H Σ_η^{−1} s − s^H Σ_η^{−1} y + y^H Σ_η^{−1} y.   (20)

We drop the term y^H Σ_η^{−1} y, which has no effect on the minimization, and replace s^H Σ_η^{−1} s with tr(Σ_η^{−1} S). Relaxing S = ss^H to S ⪰ ss^H, the SDR formulation becomes

min_{S,s} tr(Σ_η^{−1} S) − y^H Σ_η^{−1} s − s^H Σ_η^{−1} y
s.t. [S, s; s^H, 1] ⪰ 0_{N+1},
     S = Toeplitz(A^2 c(0), A^2 c(1)^*, ..., A^2 c(N−1)^*).   (21)

Note that no constraints are imposed on A^2 c(n), n = 0, 1, ..., N − 1, and the SDR solution ŝ is the estimated waveform, which is a pure complex tone. The parameter estimates are easily computed as

ω̂ = ∠([Ŝ]_{2,1}),   (22)

Â = ([Ŝ]_{1,1})^{1/2},   (23)

φ̂ = ∠(ŝ(1) [Ŝ]_{1,2}).   (24)
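The waveform-domain NLS SDR (21) can be sketched in the same way. The version below hard-codes Σ_η = I_N (white noise with unit variance, an assumption made for brevity); S and s are embedded in one Hermitian PSD variable, and the Toeplitz constraint (19) is imposed by tying every subdiagonal of S to a single entry.

```python
import numpy as np
import cvxpy as cp

def sdr_nls_single_tone(y):
    """Sketch of the NLS SDR (21) with Sigma_eta = I, plus the recovery (22)-(24)."""
    N = len(y)
    Z = cp.Variable((N + 1, N + 1), hermitian=True)     # Z = [[S, s], [s^H, 1]]
    S, s = Z[:N, :N], Z[:N, N]
    cons = [Z >> 0, Z[N, N] == 1]
    for i in range(1, N):
        cons.append(S[i, i] == S[0, 0])                 # constant diagonal A^2 c(0)
        for j in range(1, i):
            cons.append(S[i, j] == S[i - j, 0])         # kth subdiagonal equals A^2 c(k)
    objective = cp.real(cp.trace(S)) - 2 * cp.real(np.conj(y) @ s)   # tr(S) - 2 Re(y^H s)
    cp.Problem(cp.Minimize(objective), cons).solve()
    S_hat, s_hat = np.asarray(S.value), np.asarray(s.value).ravel()
    omega_hat = np.angle(S_hat[1, 0])                   # (22)
    A_hat = np.sqrt(max(S_hat[0, 0].real, 0.0))         # (23)
    phi_hat = np.angle(s_hat[0] * S_hat[0, 1])          # (24)
    return omega_hat, A_hat, phi_hat
```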
3. Single Real Tone

In this section, we switch to finding the parameters of a noisy real tone, which is another fundamental sinusoidal estimation problem. The signal model is

x(n) = s(n) + q(n),  n = 1, 2, ..., N,   (25)

where

s(n) = A cos(ωn + φ).   (26)

The real and complex models are very similar, except that now ω ∈ (0, π) and the Gaussian noise q(n) is real. Similarly, the covariance matrix of the noise vector q = [q(1), q(2), ..., q(N)]^T, denoted by Σ_q = E{qq^T}, is assumed known up to a scalar. As the periodogram is not an optimum estimator for a real tone, only the LPP and NLS approaches are presented as follows.

3.1. Linear Prediction Polynomial Approach. Let ρ_1 = cos(ω). Utilizing the prediction property s(n) − 2ρ_1 s(n−1) + s(n−2) = 0 and [34], the LPP estimator for ρ_1 is

min_{ρ_1} (x_3 − 2ρ_1 x_2 + x_1)^T Σ_e^{−1} (x_3 − 2ρ_1 x_2 + x_1),   (27)

where x_i = [x(i), x(i+1), ..., x(N−3+i)]^T, i = 1, 2, 3, and Σ_e = E{ee^T} ∈ R^{N−2} with e = [e(1), e(2), ..., e(N−2)]^T, whose elements are e(n) = q(n+2) − 2ρ_1 q(n+1) + q(n), n = 1, 2, ..., N−2. Assuming that Σ_q = σ_q^2 I_N, a scaled version of Σ_e is employed:

Σ_e = Toeplitz([2 + 4ρ_1^2, −4ρ_1, 1, 0, ..., 0]^T),   (28)

where the first row contains N − 5 trailing zeros. The minimizer of (27) is identical to

min_{v, ρ_1} v   (29)

s.t. v = (x_3 − 2ρ_1 x_2 + x_1)^T Σ_e^{−1} (x_3 − 2ρ_1 x_2 + x_1),   (30)

Σ_e = Toeplitz([2 + 4ρ_1^2, −4ρ_1, 1, 0, ..., 0]^T).   (31)

Now we make two relaxations. First, we relax the constraint (30) as in (14). Second, as Σ_e is nonlinear in ρ_1 in (31), we substitute ρ_1^2 with a dummy variable β and include a new relaxed constraint β ≥ ρ_1^2. To strengthen the constraint, we assume that the sign of ρ_1, denoted by Ψ, is known, which can be easily determined. As |ρ_1| ≤ 1, we have Ψρ_1 ≥ ρ_1^2, so a tighter constraint is Ψρ_1 ≥ β ≥ ρ_1^2. As a result, the SDR formulation approximating (27) is

min_{v, ρ_1, β} v
s.t. [v, (x_3 − 2ρ_1 x_2 + x_1)^T; x_3 − 2ρ_1 x_2 + x_1, Σ_e] ⪰ 0_{N−1},
     Σ_e = Toeplitz([2 + 4β, −4ρ_1, 1, 0, ..., 0]^T),
     Ψρ_1 ≥ β ≥ ρ_1^2,   (32)

where the matrix inverse operator that appears in (30) is removed by the Schur complement representation. In our study, Ψ is obtained using

Ψ = sign( Σ_{n=3}^{N} x(n−1) [x(n) + x(n−2)] ).   (33)

After obtaining ρ̂_1, the estimated frequency ω̂ is computed as

ω̂ = cos^{−1}(ρ̂_1).   (34)

With the use of ω̂, the amplitude and phase are estimated according to the constrained least squares problem

min_ξ (x − Bξ)^T (x − Bξ)
s.t. ξ^T ξ = A^2,   (35)

where B is the N × 2 matrix whose kth row is [cos(kω̂), −sin(kω̂)], k = 1, 2, ..., N,

ξ = [A cos(φ), A sin(φ)]^T,   (36)

and x = [x(1), x(2), ..., x(N)]^T. By expanding the objective function of (35), removing the ξ-independent term, and introducing a dummy matrix Ξ = ξξ^T, (35) is equivalent to

min_{Ξ, ξ} tr(B^T B Ξ) − 2x^T Bξ
s.t. tr(Ξ) = A^2,
     Ξ = ξξ^T.   (37)

Applying relaxation on Ξ = ξξ^T, the SDR estimator for A and φ is

min_{Ξ, ξ} tr(B^T B Ξ) − 2x^T Bξ
s.t. tr(Ξ) = A^2,
     [Ξ, ξ; ξ^T, 1] ⪰ 0_3,   (38)

where we treat A^2 as a constant. Here, we have a convex program with a unique optimum, without dealing with multimodality as in the Lagrange multiplier approach. Finally, the amplitude and phase estimates are computed as

Â = (tr(Ξ̂))^{1/2},   (39)

φ̂ = tan^{−1}([ξ̂]_2 / [ξ̂]_1).   (40)

3.2. Nonlinear Least Squares Approach. Similar to Section 2.3, the NLS estimator for the real sinusoid is

min_{s,ω,φ,A} (x − s)^T Σ_q^{−1} (x − s)
s.t. s(n) = A cos(ωn + φ),  n = 1, 2, ..., N,   (41)

where x = [x(1), x(2), ..., x(N)]^T and s = [s(1), s(2), ..., s(N)]^T. Now we transform the constraint s(n) = A cos(ωn + φ) using the general LPP property as follows:

s(n) + s(n − 2m) = 2ρ_m s(n − m),  n = 1, 2, ..., N,  m = 1, 2, ..., M,   (42)

where ρ_m = cos(mω) and M = ⌊(N − 1)/2⌋ is the maximum index for m. By considering various n in (42), the set of equalities is converted to

(s(i) + s(i − 2m)) / s(i − m) = (s(j) + s(j − 2m)) / s(j − m)
⟹ s(i) s(j − m) + s(i − 2m) s(j − m) = s(i − m) s(j) + s(i − m) s(j − 2m),
m = 1, 2, ..., M,  i > j = 2m + 1, 2m + 2, ..., N,   (43)

where {ρ_m} and ω are eliminated. Next, we define S = ss^T ∈ R^N with [S]_{i,j} = s(i)s(j) to write (43) as

[S]_{i,j−m} + [S]_{i−2m,j−m} = [S]_{i−m,j} + [S]_{i−m,j−2m},  m = 1, 2, ..., M,  i > j = 2m + 1, 2m + 2, ..., N.   (44)

Using a procedure similar to (20) and (21), the objective function of (41) is modified to tr(Σ_q^{−1} S) − 2s^T Σ_q^{−1} x. The SDR formulation is then given as

min_{s,S} tr(Σ_q^{−1} S) − 2s^T Σ_q^{−1} x
s.t. [S, s; s^T, 1] ⪰ 0_{N+1},
     [S]_{i,j−m} + [S]_{i−2m,j−m} = [S]_{i−m,j} + [S]_{i−m,j−2m},  m = 1, 2, ..., M,  i > j = 2m + 1, 2m + 2, ..., N.   (45)

As ŝ is the waveform estimate, which is restricted to be a real tone, the parameter estimates are sequentially computed as

ω̂ = cos^{−1}( (ŝ(1) + ŝ(3)) / (2ŝ(2)) ),   (46)

φ̂ = tan^{−1}( (ŝ(1) cos(2ω̂) − ŝ(2) cos(ω̂)) / (ŝ(1) sin(2ω̂) − ŝ(2) sin(ω̂)) ),   (47)

Â = ŝ(1) / cos(ω̂ + φ̂).   (48)
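Stepping back to Section 3.1, the distinctive ingredients of the real-tone LPP relaxation are the sign estimate Ψ in (33) and the dummy variable β with Ψρ_1 ≥ β ≥ ρ_1^2. The following CVXPY sketch of (32)-(34) assumes Σ_q = σ_q^2 I_N and, as before, is an illustrative rendering rather than the authors' implementation.

```python
import numpy as np
import cvxpy as cp

def sdr_lpp_real_tone(x):
    """Sketch of the real-tone LPP SDR (32) and the frequency recovery (34)."""
    N = len(x)
    x1, x2, x3 = x[:N - 2], x[1:N - 1], x[2:]            # x_1, x_2, x_3 in (27)
    Psi = np.sign(np.sum(x2 * (x3 + x1)))                # sign of rho_1, (33)
    U1, U2 = np.eye(N - 2, k=1), np.eye(N - 2, k=2)      # first/second superdiagonals
    rho1, beta = cp.Variable(), cp.Variable()
    W = cp.Variable((N - 1, N - 1), symmetric=True)      # W = [[v, r^T], [r, Sigma_e]]
    r = x3 - 2 * rho1 * x2 + x1                          # LPP residual
    Sigma_e = (2 * np.eye(N - 2) + 4 * beta * np.eye(N - 2)
               - 4 * rho1 * (U1 + U1.T) + (U2 + U2.T))   # Toeplitz([2 + 4*beta, -4*rho1, 1, 0, ...])
    cons = [W >> 0,
            W[1:, 0] == r,
            W[1:, 1:] == Sigma_e,
            cp.square(rho1) <= beta,                     # beta >= rho_1^2
            beta <= Psi * rho1]                          # Psi*rho_1 >= beta
    cp.Problem(cp.Minimize(W[0, 0]), cons).solve()
    return np.arccos(np.clip(rho1.value, -1.0, 1.0))     # omega_hat, (34)
```

The amplitude and phase would then follow from the constrained LS relaxation (38)-(40).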
4. Multiple Complex Tones

In this section, we generalize the development in Sections 2 and 3 to multiple complex sinusoids. Now the signal model is

y(n) = s(n) + η(n),  n = 1, 2, ..., N,   (49)

where

s(n) = Σ_{m=1}^{L} A_m exp{j(ω_m n + φ_m)},   (50)

and A_m > 0, ω_m ∈ (−π, π), and φ_m ∈ [0, 2π) represent the amplitude, frequency, and phase of the mth complex tone, respectively. They are unknown constants with ω_i ≠ ω_j for i ≠ j, while η(n) is a zero-mean complex Gaussian noise. The number of sinusoids L is assumed known. In the following, the SDR algorithm based on the LPP approach is presented; it is difficult to apply the NLS approach here, as there would be numerous cross-terms. Based on the linear prediction property for a multiple complex tone signal, we have

s(n) + Σ_{m=1}^{L} a_m s(n − m) = 0,   (51)

where

a_1 = −Σ_{i_1=1}^{L} ρ_{i_1},
a_2 = Σ_{i_1=1}^{L} Σ_{i_2=1}^{i_1−1} ρ_{i_1} ρ_{i_2},
...
a_m = (−1)^m Σ_{i_1=1}^{L} Σ_{i_2=1}^{i_1−1} ··· Σ_{i_m=1}^{i_{m−1}−1} Π_{j=1}^{m} ρ_{i_j},  m = 1, 2, ..., M,
...
a_L = (−1)^L Π_{j=1}^{L} ρ_j,   (52)

and ρ_m = exp{jω_m}. Note that {ρ_m} are the roots of the polynomial with {a_m} as coefficients. On the other hand, the LPP estimator for {a_m} is

min_{{a_m}} (y_{L+1} + Σ_{m=1}^{L} a_m y_m)^H Σ_e^{−1} (y_{L+1} + Σ_{m=1}^{L} a_m y_m),   (53)

where y_i = [y(i), y(i+1), ..., y(N−L−1+i)]^T, i = 1, 2, ..., L+1, are vectors of the observed signal, and Σ_e = E{ee^H} ∈ C^{N−L} with e = [e(1), e(2), ..., e(N−L)]^T, whose elements are defined as

e(n − L) = y(n) + Σ_{m=1}^{L} a_m y(n − m),  n = L+1, L+2, ..., N.   (54)

Assuming that Σ_η = σ_η^2 I_N, Σ_e becomes

Σ_e = Toeplitz([Σ_{m=0}^{L} a_m a_m^*, Σ_{m=1}^{L} a_m a_{m−1}^*, ..., Σ_{m=κ}^{L} a_m a_{m−κ}^*, ..., a_L a_0^*, 0, ..., 0]),   (55)

with a_0 = 1 and N − 2L − 1 trailing zeros in the first row. As in the estimator for a single complex tone, we rewrite the optimization problem of (53) by including (55) as

min_{{a_m}} (y_{L+1} + Σ_{m=1}^{L} a_m y_m)^H Σ_e^{−1} (y_{L+1} + Σ_{m=1}^{L} a_m y_m)
s.t. Σ_e = Toeplitz([Σ_{m=0}^{L} a_m a_m^*, Σ_{m=1}^{L} a_m a_{m−1}^*, ..., Σ_{m=κ}^{L} a_m a_{m−κ}^*, ..., a_L a_0^*, 0, ..., 0]),   (56)

which contains cross-terms involving {a_m} and is therefore a nonconvex program. We now define two dummy matrices A ∈ C^{L+1} and R ∈ C^{L+1} to replace {a_m} and {ρ_m}:

A := [aa^H, a; a^H, 1],   (57)

R := [ρρ^H, ρ; ρ^H, 1],   (58)

where ρ = [ρ_1, ρ_2, ..., ρ_L]^T and a = [a_1, a_2, ..., a_L]^T. When A is introduced, the constraints are not tight enough, because the relation between the entries of A and {a_m} is relaxed. To compensate for this, we look for relationships between the entries of A and R, which is analogous to (43). For the (L, L) entry of A, we have

[A]_{L,L} = a_L a_L^* = |(−1)^L Π_{i=1}^{L} ρ_i|^2 = 1.   (59)

For the Lth row and Lth column, the relations are

[A]_{m,L} = a_m a_L^* = a_{L−m}^*,  m = 1, 2, ..., L,   (60)

since |ρ_i| = 1 and (−1)^{L+m} = (−1)^{L−m}. Hence, A has the structure

[A]_{m,k} = a_m a_k^* = (a_m a_L^*)(a_k a_L^*)^* = [A]_{L−m,L−k},  m, k = 1, 2, ..., L − 1.   (61)

Finally, [A]_{1,1} and R are connected by

[A]_{1,1} = a_1 a_1^* = |−Σ_{i_1=1}^{L} ρ_{i_1}|^2 = Σ_{m=1}^{L} Σ_{k=1}^{L} [R]_{m,k},   (62)

and all the diagonal elements of R are

[diag(R, 0)]_m = 1,  m = 1, 2, ..., L.   (63)

Constraints which limit the degrees of freedom of A and R are obtained from (59)-(63). To fit (56) into the convex framework, we replace a_i a_j^* with [A]_{i,j}. Furthermore, by introducing a dummy variable v, we turn the objective function of (53) into a constraint. As a result, the optimization problem is now

min_{A, {a_m}, v} v
s.t. v = (y_{L+1} + Σ_{m=1}^{L} a_m y_m)^H Σ_e^{−1} (y_{L+1} + Σ_{m=1}^{L} a_m y_m),
     Σ_e = Toeplitz([Σ_{m=0}^{L} [A]_{m,m}, Σ_{m=1}^{L} [A]_{m,m−1}, ..., Σ_{m=κ}^{L} [A]_{m,m−κ}, ..., [A]_{L,0}, 0, ..., 0]),
     a_1 = −(ρ_1 + ρ_2 + ··· + ρ_L),   (64)

where the last constraint comes from the definition of a_1 in (52), and we have defined [A]_{0,0} = 1, [A]_{m,0} = a_m, and [A]_{0,m} = a_m^*, m = 1, 2, ..., L. Using the Schur complement representation and performing SDR relaxation, (64) becomes

min_{R, A, {a_m}, {ρ_m}, v} v
s.t. [v, (y_{L+1} + Σ_{m=1}^{L} a_m y_m)^H; y_{L+1} + Σ_{m=1}^{L} a_m y_m, Σ_e] ⪰ 0_{N−L+1},
     Σ_e = Toeplitz([Σ_{m=0}^{L} [A]_{m,m}, Σ_{m=1}^{L} [A]_{m,m−1}, ..., Σ_{m=κ}^{L} [A]_{m,m−κ}, ..., [A]_{L,0}, 0, ..., 0]),
     R ⪰ 0_{L+1},
     A ⪰ 0_{L+1},
     a_1 = −(ρ_1 + ρ_2 + ··· + ρ_L),
     [A]_{m,k} = [A]_{L−m,L−k},  m, k = 1, 2, ..., L,
     [A]_{1,1} = Σ_{m=1}^{L} Σ_{k=1}^{L} [R]_{m,k},
     [A]_{L,L} = 1,
     [diag(R, 0)]_m = 1,  m = 1, 2, ..., L,   (65)

where the last four constraints are based on (59)-(63) to limit the matrix structure. It is worth mentioning that, although constraints are imposed, some entries are not well bounded, which may decrease the estimation accuracy. Note also that, similar to (32), the inverse matrix operator is removed through the Schur complement representation. The frequencies are estimated by finding the phases of {ρ̂_m}. After obtaining {ω̂_m}, {A_m} and {φ_m} are computed similarly to (35):

min_{ξ, {A_m^2}} (y − Bξ)^H (y − Bξ)
s.t. [diag(ξξ^H, 0)]_m = A_m^2,  m = 1, 2, ..., L,   (66)

where B is the N × L matrix with [B]_{n,m} = exp{jnω̂_m}, and

ξ = [A_1 exp{jφ_1}, A_2 exp{jφ_2}, ..., A_L exp{jφ_L}]^T.   (67)

By expanding (66) and dropping the irrelevant terms, (66) is equivalent to

min_{ξ, {A_m^2}} ξ^H B^H Bξ − 2y^H Bξ
s.t. [diag(ξξ^H, 0)]_m = A_m^2,  m = 1, 2, ..., L.   (68)

We then define a matrix Ξ = ξξ^H and perform relaxation on (68) to yield

min_{Ξ, ξ, {A_m}} tr(B^H B Ξ) − 2y^H Bξ
s.t. [diag(Ξ, 0)]_m = A_m^2,  m = 1, 2, ..., L,
     [Ξ, ξ; ξ^H, 1] ⪰ 0_{L+1},   (69)

where {A_m^2} are treated as constants. The amplitude and phase estimates are computed as

Â_m = ([diag(Ξ̂, 0)]_m)^{1/2},  m = 1, 2, ..., L,   (70)

φ̂_m = ∠([ξ̂]_m),  m = 1, 2, ..., L.   (71)
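Once the LPP coefficient estimates {â_m} are available from (65), the frequencies follow from the roots of the associated polynomial, as noted after (52). The NumPy sketch below does exactly that and then, purely as a simple stand-in for the constrained SDR (69), recovers amplitudes and phases by unconstrained least squares on the matrix B of (67); the LS step is our simplification, not the paper's procedure.

```python
import numpy as np

def tones_from_lpp_coefficients(y, a):
    """Frequencies from the roots of z^L + a_1 z^{L-1} + ... + a_L (cf. (51)-(52)),
    then a plain least-squares fit of xi_m = A_m exp{j*phi_m} using B of (67)."""
    y, a = np.asarray(y), np.asarray(a, dtype=complex)
    N, L = len(y), len(a)
    rho = np.roots(np.concatenate(([1.0 + 0j], a)))      # estimates of rho_m = exp{j*omega_m}
    omega = np.angle(rho)                                # omega_hat_m
    n = np.arange(1, N + 1)[:, None]
    B = np.exp(1j * n * omega[None, :])                  # [B]_{n,m} = exp{j*n*omega_hat_m}
    xi, *_ = np.linalg.lstsq(B, y, rcond=None)           # xi_m ~ A_m exp{j*phi_m}
    return omega, np.abs(xi), np.angle(xi)               # frequencies, amplitudes, phases
```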
As an illustration, consider L = 2. The linear prediction property (51) gives

s(n) + a_1 s(n−1) + a_2 s(n−2) = 0,   (72)

where a_1 = −(ρ_1 + ρ_2) and a_2 = ρ_1 ρ_2. The covariance matrix is

Σ_e = Toeplitz([1 + a_1 a_1^* + a_2 a_2^*, a_1 + a_2 a_1^*, a_2^*, 0, ..., 0]) ∈ C^{N−2},   (73)

with N − 5 trailing zeros in the first row. From (59) and (62), and replacing a_i a_i^* with [A]_{i,i}, i = 1, 2, we have

1 + a_1 a_1^* + a_2 a_2^* = 1 + [A]_{1,1} + [A]_{2,2} = 2 + Σ_{m=1}^{2} Σ_{k=1}^{2} [R]_{m,k}.   (74)

From (61), we get

a_1 + a_2 a_1^* = a_1 + [A]_{2,1} = a_1 + [A]_{2−2,2−1} = a_1 + a_1^*.   (75)

Applying (74) and (75) yields

min_{a_1, a_2, ρ_1, ρ_2, v} v
s.t. v = (y_3 + a_1 y_1 + a_2 y_2)^H Σ_e^{−1} (y_3 + a_1 y_1 + a_2 y_2),
     a_1 = −(ρ_1 + ρ_2),
     Σ_e = Toeplitz([2 + Σ_{m=1}^{2} Σ_{k=1}^{2} [R]_{m,k}, a_1 + a_1^*, a_2^*, 0, ..., 0]).   (76)

Finally, the SDR algorithm for the frequencies is

min_{R, A, a_1, a_2, ρ_1, ρ_2, v} v
s.t. [v, (y_3 + a_1 y_1 + a_2 y_2)^H; y_3 + a_1 y_1 + a_2 y_2, Σ_e] ⪰ 0_{N−1},
     R ⪰ 0_3,
     A ⪰ 0_3,
     Σ_e = Toeplitz([2 + Σ_{m=1}^{2} Σ_{k=1}^{2} [R]_{m,k}, a_1 + a_1^*, a_2^*, 0, ..., 0]),
     a_1 = −(ρ_1 + ρ_2),
     [diag(R, 0)]_m = 1,  m = 1, 2.   (77)

Using (69)-(71), Â_1, φ̂_1, Â_2, and φ̂_2 can then be determined. For L = 2, B and ξ have the forms

B is the N × 2 matrix with [B]_{n,m} = exp{jnω̂_m}, m = 1, 2,
ξ = [A_1 exp{jφ_1}, A_2 exp{jφ_2}]^T.   (78)

It is noteworthy that the real signal model can be tackled in a similar manner by including two additional constraints: {a_m} are symmetric and {ρ_m} occur in conjugate pairs.

[Figures 1-6 appear on these pages; they plot the simulation results discussed in Section 7, comparing the periodogram, LPP, NLS, and IQML methods with the CRLB versus SNR (dB). Figure 1: MSE for A of single complex sinusoid at N = 20. Figure 2: MSE for ω of single complex sinusoid at N = 20. Figure 3: MSE for φ of single complex sinusoid at N = 20. Figure 4: MSE for A of single complex sinusoid at N = 128. Figure 5: MSE for ω of single complex sinusoid at N = 128. Figure 6: MSE for φ of single complex sinusoid at N = 128.]
5. Two-Dimensional Complex Tone

In this section, we extend the periodogram and NLS approaches of Section 2 to parameter estimation of a 2D complex sinusoid in additive noise. It is straightforward to apply the LPP approach as well, although its formulation is more tedious. The observed 2D signal is modeled as

y(m, n) = A exp{j(μm + νn + φ)} + η(m, n),  m = 1, 2, ..., M,  n = 1, 2, ..., N,   (79)

where A is the unknown amplitude, μ and ν are the unknown 2D frequencies, and φ is the initial phase, while η(m, n) is a zero-mean Gaussian noise. The task is to find μ, ν, A, and φ from the MN samples of {y(m, n)}. To facilitate the algorithm development, we let [S]_{m,n} = A exp{j(μm + νn + φ)}, [Y]_{m,n} = y(m, n), and [N]_{m,n} = η(m, n) to rewrite (79) as

Y = S + N.   (80)

5.1. Periodogram Approach. Assuming that {η(m, n)} are IID, that is, E{vec(N) vec(N)^H} = σ_η^2 I_{MN} with unknown σ_η^2, the ML frequency estimates can be obtained from the periodogram maximum

max_{μ,ν} |Y(μ, ν)|^2,   (81)

where Y(μ, ν) = Σ_{m=1}^{M} Σ_{n=1}^{N} y(m, n) exp{−j(μm + νn)} is the 2D DTFT of y(m, n). By introducing a dummy matrix C, which is analogous to c in the single complex tone model, the optimization problem of (81) is equivalent to

max_{C,μ,ν} |vec(C)^H vec(Y)|^2
s.t. [C]_{m,n} = exp{j(μm + νn)}.   (82)
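The conventional way to evaluate (81) — and the DFT-peak initialization strategy mentioned in Section 2.1 — is a grid search over the 2D periodogram. The NumPy sketch below is that baseline, not the SDR formulation built from (82); the grid size is an arbitrary illustrative choice, and the FFT's zero-based indexing only changes the phase of Y(μ, ν), not the location of the magnitude peak.

```python
import numpy as np

def periodogram_2d_peak(Y, grid=256):
    """Coarse grid search for the 2D periodogram maximum in (81) via the 2D FFT."""
    F = np.fft.fft2(Y, s=(grid, grid))            # samples of Y(mu, nu) on a uniform grid
    k, l = np.unravel_index(np.argmax(np.abs(F) ** 2), F.shape)
    mu_hat, nu_hat = 2 * np.pi * k / grid, 2 * np.pi * l / grid
    # map the estimates from [0, 2*pi) to (-pi, pi]
    mu_hat -= 2 * np.pi * (mu_hat > np.pi)
    nu_hat -= 2 * np.pi * (nu_hat > np.pi)
    return mu_hat, nu_hat
```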