
Linear Matrix Inequalities

Linear Matrix Inequalities (LMIs) and LMI techniques have emerged as powerful design tools in areas ranging from control engineering to system identification and structural design. Three factors make LMI techniques appealing:

  • A variety of design specifications and constraints can be expressed as LMIs.

  • Once formulated in terms of LMIs, a problem can be solved exactly by efficient convex optimization algorithms (see LMI Solvers).

  • While most problems with multiple constraints or objectives lack analytical solutions in terms of matrix equations, they often remain tractable in the LMI framework. This makes LMI-based design a valuable alternative to classical “analytical” methods.

See [9] for a good introduction to LMI concepts. Robust Control Toolbox™ software is designed as an easy and progressive gateway to the new and fast-growing field of LMIs:

  • For users who occasionally need to solve LMI problems, the LMI Editor and the tutorial introduction to LMI concepts and LMI solvers make it quick and easy to state and solve such problems.

  • For more experienced LMI users, the LMI Lab offers a rich, flexible, and fully programmable environment to develop customized LMI-based tools.

LMI Features

Robust Control Toolbox LMI functionality serves two purposes:

  • Provide state-of-the-art tools for the LMI-based analysis and design of robust control systems

  • Offer a flexible and user-friendly environment to specify and solve general LMI problems (the LMI Lab)

Examples of LMI-based analysis and design tools include

  • Functions to analyze the robust stability and performance of uncertain systems with varying parameters (popov)

  • Functions to design robust control with a mix of H2, H∞, and pole placement objectives (h2hinfsyn)

  • Functions for synthesizing robust gain-scheduled H∞ controllers (hinfgs)

For users interested in developing their own applications, the LMI Lab provides a general-purpose and fully programmable environment to specify and solve virtually any LMI problem. Note that the scope of this facility is by no means restricted to control-oriented applications.

Note

Robust Control Toolbox software implements state-of-the-art interior-point LMI solvers. While these solvers are significantly faster than classical convex optimization algorithms, you should keep in mind that the complexity of LMI computations can grow quickly with the problem order (number of states). For example, the number of operations required to solve a Riccati equation is O(n³), where n is the state dimension, while the cost of solving an equivalent “Riccati inequality” LMI is O(n⁶).

LMIs and LMI Problems

A linear matrix inequality (LMI) is any constraint of the form

A(x) := A0 + x1A1 + ... + xNAN < 0        (1)

where

  • x = (x1, . . . , xN) is a vector of unknown scalars (the decision or optimization variables)

  • A0, . . . , AN are given symmetric matrices

  • < 0 stands for “negative definite,” i.e., the largest eigenvalue of A(x) is negative

Note that the constraints A(x) > 0 and A(x) < B(x) are special cases of Equation 1 since they can be rewritten as –A(x) < 0 and A(x) – B(x) < 0, respectively.
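
As a simple illustration (a hypothetical example, not taken from the toolbox documentation), the requirement that the 2-by-2 symmetric matrix [x1 x2; x2 1] be positive definite is the LMI –[x1 x2; x2 1] < 0. This matches Equation 1 with N = 2 and

A0 = [0 0; 0 -1],    A1 = [-1 0; 0 0],    A2 = [0 -1; -1 0]

so that A0 + x1A1 + x2A2 = –[x1 x2; x2 1].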

The LMI of Equation 1 is a convex constraint on x: because A(·) is affine in x, A(y) < 0 and A(z) < 0 imply that A((y + z)/2) = (A(y) + A(z))/2 < 0. As a result,

  • Its solution set, called the feasible set, is a convex subset of R^N

  • Finding a solution x to Equation 1, if any, is a convex optimization problem.

Convexity has an important consequence: even though Equation 1 has no analytical solution in general, it can be solved numerically with guarantees of finding a solution when one exists. Note that a system of LMI constraints can be regarded as a single LMI since

A1(x) < 0,   . . . ,   AK(x) < 0

is equivalent to

A(x) := diag(A1(x), . . . , AK(x)) < 0

where diag(A1(x), . . . , AK(x)) denotes the block-diagonal matrix with A1(x), . . . , AK(x) on its diagonal. Hence multiple LMI constraints can be imposed on the vector of decision variables x without destroying convexity.
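
For instance (a purely illustrative scalar case, not from the toolbox documentation), the two constraints x1 – 1 < 0 and –x1 < 0 combine into the single LMI

diag(x1 – 1, –x1) < 0

whose feasible set is the open interval 0 < x1 < 1.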

In most control applications, LMIs do not naturally arise in the canonical form of Equation 1, but rather in the form

L(X1, . . . , Xn) < R(X1, . . . , Xn)

where L(.) and R(.) are affine functions of some structured matrix variables X1, . . . , Xn. A simple example is the Lyapunov inequality

AᵀX + XA < 0        (2)

where the unknown X is a symmetric matrix. Defining x1, . . . , xN as the independent scalar entries of X, this LMI could be rewritten in the form of Equation 1. Yet it is more convenient and efficient to describe it in the natural form of Equation 2, which is the approach taken in the LMI Lab.
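
As a minimal sketch of this natural-form specification, the fragment below sets up the Lyapunov inequality of Equation 2 in the LMI Lab and solves the corresponding feasibility problem. The test matrix A and the added normalization X > I (which rules out the trivial certificate X = 0) are illustrative assumptions rather than part of the discussion above.

    % Illustrative sketch: feasibility of A'*X + X*A < 0 with the LMI Lab.
    % The matrix A and the extra constraint X > I are assumptions for this example.
    A = [-1  2;  0 -3];             % a stable test matrix

    setlmis([]);                    % start describing a new system of LMIs
    X = lmivar(1,[2 1]);            % X = 2-by-2 symmetric matrix variable

    % LMI #1:  A'*X + X*A < 0   (entered directly in its matrix form)
    lmiterm([1 1 1 X],1,A,'s');     % adds X*A + A'*X to the (1,1) block

    % LMI #2:  X > I, written as I < X
    lmiterm([2 1 1 0],1);           % left-hand side:  identity
    lmiterm([-2 1 1 X],1,1);        % right-hand side: X

    lmis = getlmis;                 % internal description of the LMI system
    [tmin,xfeas] = feasp(lmis);     % tmin < 0 indicates strict feasibility
    Xsol = dec2mat(lmis,xfeas,X)    % matrix value of X for this solution

Working directly with the matrix variable X in this way, rather than expanding it into the scalar decision variables of Equation 1, is precisely the convenience that the LMI Lab provides.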
