Lyapunov Matrix Equation Solvers
Continuous-time Lyapunov Matrix Equations
MatrixEquations.lyapc — Function
X = lyapc(A, C)
Compute X, the solution of the continuous Lyapunov equation

AX + XA' + C = 0,

where A is a square real or complex matrix and C is a square matrix. A must not have two eigenvalues α and β such that α+β = 0. The solution X is symmetric or hermitian if C is symmetric or hermitian.
The following particular cases are also addressed:
X = lyapc(α*I,C) or X = lyapc(α,C)
Solve the matrix equation (α+α')X + C = 0.

x = lyapc(α,γ)
Solve the equation (α+α')x + γ = 0.
Example
julia> A = [3. 4.; 5. 6.]
2×2 Array{Float64,2}:
3.0 4.0
5.0 6.0
julia> C = [1. 1.; 1. 2.]
2×2 Array{Float64,2}:
1.0 1.0
1.0 2.0
julia> X = lyapc(A, C)
2×2 Array{Float64,2}:
0.5 -0.5
-0.5 0.25
julia> A*X + X*A' + C
2×2 Array{Float64,2}:
-8.88178e-16 2.22045e-16
2.22045e-16 -4.44089e-16
X = lyapc(A, E, C)
Compute X, the solution of the generalized continuous Lyapunov equation

AXE' + EXA' + C = 0,

where A and E are square real or complex matrices and C is a square matrix. The pencil A-λE must not have two eigenvalues α and β such that α+β = 0. The solution X is symmetric or hermitian if C is symmetric or hermitian.
The following particular cases are also addressed:
X = lyapc(A,β*I,C) or X = lyapc(A,β,C)
Solve the matrix equation AXβ' + βXA' + C = 0.

X = lyapc(α*I,E,C) or X = lyapc(α,E,C)
Solve the matrix equation αXE' + EXα' + C = 0.

X = lyapc(α*I,β*I,C) or X = lyapc(α,β,C)
Solve the matrix equation (αβ'+α'β)X + C = 0.

x = lyapc(α,β,γ)
Solve the equation (αβ'+α'β)x + γ = 0.
Example
julia> A = [3. 4.; 5. 6.]
2×2 Array{Float64,2}:
3.0 4.0
5.0 6.0
julia> E = [ 1. 2.; 0. 1.]
2×2 Array{Float64,2}:
1.0 2.0
0.0 1.0
julia> C = [1. 1.; 1. 2.]
2×2 Array{Float64,2}:
1.0 1.0
1.0 2.0
julia> X = lyapc(A, E, C)
2×2 Array{Float64,2}:
-2.5 2.5
2.5 -2.25
julia> A*X*E' + E*X*A' + C
2×2 Array{Float64,2}:
-5.32907e-15 -2.66454e-15
-4.44089e-15 0.0
MatrixEquations.lyapcs! — Function
lyapcs!(A, C; adj = false)
Solve the continuous Lyapunov matrix equation

op(A)X + Xop(A)' + C = 0,

where op(A) = A if adj = false and op(A) = A' if adj = true. A is a square real matrix in a real Schur form, or a square complex matrix in a complex Schur form, and C is a symmetric or hermitian matrix. A must not have two eigenvalues α and β such that α+β = 0. On output, C contains the solution X.
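The following sketch (not part of the original documentation) illustrates one way to use lyapcs! on a general matrix: reduce A to Schur form with LinearAlgebra.schur, solve in the Schur coordinates, and transform back. The data reuses the lyapc example above; the transformation steps are our own and assume A = F.Z*F.T*F.Z'.

using MatrixEquations, LinearAlgebra

A = [3. 4.; 5. 6.]
C = [1. 1.; 1. 2.]

F  = schur(A)               # real Schur form: A = F.Z*F.T*F.Z'
Ct = F.Z'*C*F.Z             # right-hand side in Schur coordinates
Ct = (Ct + Ct')/2           # re-symmetrize to guard against rounding
lyapcs!(F.T, Ct)            # Ct now holds Y with F.T*Y + Y*F.T' + F.Z'*C*F.Z = 0
X  = F.Z*Ct*F.Z'            # back-transform: X solves A*X + X*A' + C = 0
norm(A*X + X*A' + C)        # residual, expected at roundoff level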
lyapcs!(A, E, C; adj = false)
Solve the generalized continuous Lyapunov matrix equation

op(A)Xop(E)' + op(E)Xop(A)' + C = 0,

where op(A) = A and op(E) = E if adj = false and op(A) = A' and op(E) = E' if adj = true. The pair (A,E) is in a generalized real or complex Schur form and C is a symmetric or hermitian matrix. The pencil A-λE must not have two eigenvalues α and β such that α+β = 0. The computed symmetric or hermitian solution X is contained in C.
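A similar hedged sketch for the generalized equation, assuming the pair (A,E) is reduced with LinearAlgebra.schur(A, E), so that A = F.Q*F.S*F.Z' and E = F.Q*F.T*F.Z'; the data reuses the generalized lyapc example above.

using MatrixEquations, LinearAlgebra

A = [3. 4.; 5. 6.]
E = [1. 2.; 0. 1.]
C = [1. 1.; 1. 2.]

F  = schur(A, E)            # generalized Schur form
Ct = F.Q'*C*F.Q             # right-hand side in generalized Schur coordinates
Ct = (Ct + Ct')/2           # re-symmetrize to guard against rounding
lyapcs!(F.S, F.T, Ct)       # Ct now holds Y with F.S*Y*F.T' + F.T*Y*F.S' + F.Q'*C*F.Q = 0
X  = F.Z*Ct*F.Z'            # X solves A*X*E' + E*X*A' + C = 0
norm(A*X*E' + E*X*A' + C)   # residual check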
MatrixEquations.tlyapc — Function
X = tlyapc(A, C, isig = 1; fast = true, atol::Real=0, rtol::Real=atol>0 ? 0 : N*ϵ)
Compute for isig = ±1 a solution of the continuous T-Lyapunov matrix equation

A*X + isig*transpose(X)*transpose(A) + C = 0

using explicit formulas based on full rank factorizations of A (see [1] and [2]). A and C are m×n and m×m matrices, respectively, and X is an n×m matrix. The matrix C must be symmetric if isig = 1 and skew-symmetric if isig = -1. atol and rtol are the absolute and relative tolerances, respectively, used for rank computation. The default relative tolerance is N*ϵ, where N = min(m,n) and ϵ is the machine precision of the element type of A.

The underlying rank revealing factorization of A is the QR-factorization with column pivoting, if fast = true (default), or the more reliable SVD-decomposition, if fast = false.
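As a hedged illustration (the matrices below are made up for this sketch), a full row rank A admits a solution for any symmetric C when isig = 1:

using MatrixEquations, LinearAlgebra

A = [1. 2. 3.; 4. 5. 6.]                   # 2×3, full row rank
C = [4. 1.; 1. 2.]                         # symmetric, as required for isig = 1
X = tlyapc(A, C)                           # 3×2 solution (isig = 1 is the default)
norm(A*X + transpose(X)*transpose(A) + C)  # residual, expected near roundoff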
[1] H. W. Braden. The equations A^T X ± X^T A = B. SIAM J. Matrix Anal. Appl., 20(2):295–302, 1998.
[2] C.-Y. Chiang, E. K.-W. Chu, W.-W. Lin, On the ★-Sylvester equation AX ± X^★ B^★ = C. Applied Mathematics and Computation, 218:8393–8407, 2012.
MatrixEquations.hlyapc — Function
X = hlyapc(A, C, isig = 1; fast = true, atol::Real=0, rtol::Real=atol>0 ? 0 : N*ϵ)
Compute for isig = ±1 a solution of the continuous H-Lyapunov matrix equation

A*X + isig*adjoint(X)*adjoint(A) + C = 0

using explicit formulas based on full rank factorizations of A (see [1] and [2]). A and C are m×n and m×m matrices, respectively, and X is an n×m matrix. The matrix C must be hermitian if isig = 1 and skew-hermitian if isig = -1. atol and rtol are the absolute and relative tolerances, respectively, used for rank computation. The default relative tolerance is N*ϵ, where N = min(m,n) and ϵ is the machine precision of the element type of A.

The underlying rank revealing factorization of Ae (or of A if A is real) is the QR-factorization with column pivoting, if fast = true (default), or the more reliable SVD-decomposition, if fast = false.
[1] H. W. Braden. The equations A^T X ± X^T A = B. SIAM J. Matrix Anal. Appl., 20(2):295–302, 1998.
[2] C.-Y. Chiang, E. K.-W. Chu, W.-W. Lin, On the ★-Sylvester equation AX ± X^★ B^★ = C. Applied Mathematics and Computation, 218:8393–8407, 2012.
MatrixEquations.tulyapc! — Function
X = tulyapc!(U, Q; adj = false)
Compute for an upper triangular U and a symmetric Q an upper triangular solution X of the continuous T-Lyapunov matrix equation

transpose(U)*X + transpose(X)*U = Q  if adj = false,

or

U*transpose(X) + X*transpose(U) = Q  if adj = true.

The solution X overwrites the matrix Q, while U is unchanged.

If U is nonsingular, a backward elimination method is used if adj = false, or a forward elimination method if adj = true, by adapting the approach employed in the function dU_from_dQ! of the DiffOpt.jl package. For an n×n singular U, a least-squares upper-triangular solution X is determined using a conjugate-gradient based iterative method applied to a suitably defined T-Lyapunov linear operator L:X -> Y, which maps upper triangular matrices X into upper triangular matrices Y; the associated matrix M = Matrix(L) is of size n(n+1)/2 × n(n+1)/2. In this case, the keyword argument tol (default: tol = sqrt(eps())) can be used to provide the desired tolerance for the accuracy of the computed solution, and the keyword argument maxiter can be used to set the maximum number of iterations (default: maxiter = 1000).
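A hedged sketch: the illustrative data below builds a consistent symmetric right-hand side from a known upper triangular X0, so the residual of the computed solution can be checked directly.

using MatrixEquations, LinearAlgebra

U  = [2. 1.; 0. 3.]                        # nonsingular upper triangular
X0 = [0.5 1.0; 0.0 2.0]                    # a known upper triangular solution
Q  = transpose(U)*X0 + transpose(X0)*U     # symmetric by construction
X  = tulyapc!(U, copy(Q))                  # copy(Q) is overwritten with the solution
norm(transpose(U)*X + transpose(X)*U - Q)  # residual, expected near roundoff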
MatrixEquations.hulyapc! — Function
X = hulyapc!(U, Q; adj = false)
Compute for an upper triangular U and a hermitian Q an upper triangular solution X of the continuous H-Lyapunov matrix equation

U'*X + X'*U = Q  if adj = false,

or

U*X' + X*U' = Q  if adj = true.

The solution X overwrites the matrix Q, while U is unchanged.

If U is nonsingular, a backward elimination method is used if adj = false, or a forward elimination method if adj = true, by adapting the approach employed in the function dU_from_dQ! of the DiffOpt.jl package. For an n×n singular U, a least-squares upper-triangular solution X is determined using a conjugate-gradient based iterative method applied to a suitably defined H-Lyapunov linear operator L:X -> Y, which maps upper triangular matrices X into upper triangular matrices Y; the associated matrix M = Matrix(L) is of size n(n+1)/2 × n(n+1)/2. In this case, the keyword argument tol (default: tol = sqrt(eps())) can be used to provide the desired tolerance for the accuracy of the computed solution, and the keyword argument maxiter can be used to set the maximum number of iterations (default: maxiter = 1000).
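The analogous hedged sketch for the hermitian case, again with made-up data:

using MatrixEquations, LinearAlgebra

U  = [2.0+im  1.0; 0.0  3.0-im]            # nonsingular upper triangular
X0 = [0.5+0.5im  1.0; 0.0  2.0-im]         # a known upper triangular solution
Q  = U'*X0 + X0'*U                         # hermitian by construction
X  = hulyapc!(U, copy(Q))                  # copy(Q) is overwritten with the solution
norm(U'*X + X'*U - Q)                      # residual check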
Discrete-time Lyapunov (Stein) Matrix Equations
MatrixEquations.lyapd — Function
X = lyapd(A, C)
Compute X, the solution of the discrete Lyapunov equation

AXA' - X + C = 0,

where A is a square real or complex matrix and C is a square matrix. A must not have two eigenvalues α and β such that αβ = 1. The solution X is symmetric or hermitian if C is symmetric or hermitian.
The following particular cases are also addressed:
X = lyapd(α*I,C) or X = lyapd(α,C)
Solve the matrix equation (αα'-1)X + C = 0.

x = lyapd(α,γ)
Solve the equation (αα'-1)x + γ = 0.
Example
julia> A = [3. 4.; 5. 6.]
2×2 Array{Float64,2}:
3.0 4.0
5.0 6.0
julia> C = [1. 1.; 1. 2.]
2×2 Array{Float64,2}:
1.0 1.0
1.0 2.0
julia> X = lyapd(A, C)
2×2 Array{Float64,2}:
0.2375 -0.2125
-0.2125 0.1375
julia> A*X*A' - X + C
2×2 Array{Float64,2}:
5.55112e-16 6.66134e-16
2.22045e-16 4.44089e-16
X = lyapd(A, E, C)
Compute X, the solution of the generalized discrete Lyapunov equation

AXA' - EXE' + C = 0,

where A and E are square real or complex matrices and C is a square matrix. The pencil A-λE must not have two eigenvalues α and β such that αβ = 1. The solution X is symmetric or hermitian if C is symmetric or hermitian.
The following particular cases are also addressed:
X = lyapd(A,β*I,C) or X = lyapd(A,β,C)
Solve the matrix equation AXA' - βXβ' + C = 0.

X = lyapd(α*I,E,C) or X = lyapd(α,E,C)
Solve the matrix equation αXα' - EXE' + C = 0.

X = lyapd(α*I,β*I,C) or X = lyapd(α,β,C)
Solve the matrix equation (αα'-ββ')X + C = 0.

x = lyapd(α,β,γ)
Solve the equation (αα'-ββ')x + γ = 0.
Example
julia> A = [3. 4.; 5. 6.]
2×2 Array{Float64,2}:
3.0 4.0
5.0 6.0
julia> E = [ 1. 2.; 0. -1.]
2×2 Array{Float64,2}:
1.0 2.0
0.0 -1.0
julia> C = [1. 1.; 1. 2.]
2×2 Array{Float64,2}:
1.0 1.0
1.0 2.0
julia> X = lyapd(A, E, C)
2×2 Array{Float64,2}:
1.775 -1.225
-1.225 0.775
julia> A*X*A' - E*X*E' + C
2×2 Array{Float64,2}:
-2.22045e-16 -4.44089e-16
-1.33227e-15 1.11022e-15
MatrixEquations.lyapds! — Function
lyapds!(A, C; adj = false)
Solve the discrete Lyapunov matrix equation

op(A)Xop(A)' - X + C = 0,

where op(A) = A if adj = false and op(A) = A' if adj = true. A is in a real or complex Schur form and C is a symmetric or hermitian matrix. A must not have two eigenvalues α and β such that αβ = 1. The computed symmetric or hermitian solution X is contained in C.
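A hedged usage sketch mirroring the continuous-time case: reduce A to Schur form with LinearAlgebra.schur, solve in the Schur coordinates, and transform back (the reduction steps are our own, not part of the original documentation).

using MatrixEquations, LinearAlgebra

A = [3. 4.; 5. 6.]
C = [1. 1.; 1. 2.]

F  = schur(A)               # real Schur form: A = F.Z*F.T*F.Z'
Ct = F.Z'*C*F.Z             # right-hand side in Schur coordinates
Ct = (Ct + Ct')/2           # re-symmetrize to guard against rounding
lyapds!(F.T, Ct)            # Ct now holds Y with F.T*Y*F.T' - Y + F.Z'*C*F.Z = 0
X  = F.Z*Ct*F.Z'            # X solves A*X*A' - X + C = 0
norm(A*X*A' - X + C)        # residual, expected at roundoff level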
lyapds!(A, E, C; adj = false)
Solve the generalized discrete Lyapunov matrix equation

op(A)Xop(A)' - op(E)Xop(E)' + C = 0,

where op(A) = A and op(E) = E if adj = false and op(A) = A' and op(E) = E' if adj = true. The pair (A,E) is in a generalized real or complex Schur form and C is a symmetric or hermitian matrix. The pencil A-λE must not have two eigenvalues α and β such that αβ = 1. The computed symmetric or hermitian solution X is contained in C.
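And a hedged sketch for the generalized discrete equation, assuming the pair (A,E) is reduced with LinearAlgebra.schur(A, E), so that A = F.Q*F.S*F.Z' and E = F.Q*F.T*F.Z'; the data reuses the generalized lyapd example above.

using MatrixEquations, LinearAlgebra

A = [3. 4.; 5. 6.]
E = [1. 2.; 0. -1.]
C = [1. 1.; 1. 2.]

F  = schur(A, E)            # generalized Schur form
Ct = F.Q'*C*F.Q             # right-hand side in generalized Schur coordinates
Ct = (Ct + Ct')/2           # re-symmetrize to guard against rounding
lyapds!(F.S, F.T, Ct)       # Ct now holds Y with F.S*Y*F.S' - F.T*Y*F.T' + F.Q'*C*F.Q = 0
X  = F.Z*Ct*F.Z'            # X solves A*X*A' - E*X*E' + C = 0
norm(A*X*A' - E*X*E' + C)   # residual check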