Linear quadratic control
In control engineering and systems and control theory, linear quadratic control, or LQ control, refers to controller design for a deterministic linear plant (that is, one involving no elements of randomness) based on the minimization of a quadratic cost functional (a functional being a real- or complex-valued function defined on a space of functions). The method is founded on the state space formalism and is a fundamental concept in linear systems and control theory.
There are two main versions of the method, depending on the setting of the control problem:
- Discrete time linear quadratic control
- Continuous time linear quadratic control
LQ control aims to find a control signal that minimizes a prescribed quadratic cost functional. In the so-called optimal regulator problem, this functional can be viewed as an abstraction of the "energy" of the overall control system and minimization of the functional corresponds to minimization of that energy.
Discrete time linear quadratic control
Plant model
In discrete time, the plant (the system to be controlled) is assumed to be linear with input $u_k$ and state $x_k$, and evolves in discrete time $k = 0, 1, \ldots$ according to the following dynamics:

$$x_{k+1} = A_k x_k + B_k u_k, \qquad x_0 = a,$$

where $x_k \in \mathbb{R}^n$ and $u_k \in \mathbb{R}^m$ for all $k$, and $A_k$ and $B_k$ are real matrices of the corresponding sizes (i.e., for consistency, $A_k$ should be of dimension $n \times n$ while $B_k$ should be of dimension $n \times m$). Here $a$ is the initial state of the plant.
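As a minimal sketch, the state recursion above can be rolled out numerically. The example below assumes a hypothetical scalar plant ($n = m = 1$), so the matrices $A_k$ and $B_k$ reduce to numbers, taken constant here for simplicity; all numerical values are made up for illustration:

```python
def simulate(A, B, controls, x0):
    """Roll out the scalar plant x_{k+1} = A*x_k + B*u_k from x_0."""
    xs = [x0]
    for u in controls:
        xs.append(A * xs[-1] + B * u)
    return xs

# hypothetical numbers: A = 0.9, B = 0.5, initial state a = 2.0
states = simulate(0.9, 0.5, [1.0, -1.0], 2.0)  # states x_0, x_1, x_2
```

The list `states` holds $x_0, x_1, x_2$; with time-varying matrices one would index `A` and `B` by the step $k$ instead.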
Cost functional
For a finite integer $K > 0$, called the control horizon or horizon, a real valued cost functional $J$ of the initial state $a$ and the control sequence $u_0, \ldots, u_{K-1}$ up to time $K-1$ is defined as follows:

$$J(a; u_0, \ldots, u_{K-1}) = \sum_{k=0}^{K-1} \left( x_k^\top Q_k x_k + u_k^\top R_k u_k \right) + x_K^\top M x_K, \qquad (1)$$

where $Q_k$ and $R_k$, $k = 0, \ldots, K-1$, are given symmetric matrices (of the corresponding sizes) satisfying $Q_k \geq 0$ and $R_k > 0$, and $M$ is a given symmetric matrix (of the corresponding size) satisfying $M \geq 0$. In this setting, control is executed for a finite time and the horizon $K$ represents the terminal time of the control action. However, depending on some technical assumptions on the plant, it may also be possible to allow $K \to \infty$, in which case one speaks of an infinite horizon.
Note that each term on the right hand side of (1) is a non-negative definite quadratic term and may be interpreted as an abstract "energy" term (e.g., $x_k^\top Q_k x_k$ as the "energy" of the state $x_k$). The term $u_k^\top R_k u_k$ accounts for penalization of the control effort. This term is necessary because overly large control signals are not desirable in general; in practice this could mean that the resulting controller cannot be implemented. The final term $x_K^\top M x_K$ is called the terminal cost and it penalizes the energy of the plant at the final state $x_K$.
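For concreteness, the cost functional (1) can be evaluated directly for a scalar plant. The weights `Q`, `R`, `M` below stand in for the state, control, and terminal weights (taken constant, with made-up values):

```python
def lq_cost(Q, R, M, xs, us):
    """Evaluate the quadratic cost (1) for a scalar plant:
    sum of Q*x_k^2 (state energy) and R*u_k^2 (control effort)
    for k = 0..K-1, plus the terminal cost M*x_K^2."""
    K = len(us)  # horizon: xs must have K + 1 entries
    running = sum(Q * xs[k] ** 2 + R * us[k] ** 2 for k in range(K))
    return running + M * xs[K] ** 2

# e.g. a one-step trajectory: 1*1^2 + 2*1^2 + 3*2^2 = 15
cost = lq_cost(1.0, 2.0, 3.0, xs=[1.0, 2.0], us=[1.0])
```

Increasing `R` relative to `Q` makes control effort more expensive, which is how the trade-off described above is tuned in practice.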
The LQ regulator problem in discrete time
The objective of LQ control is to solve the optimal regulator problem:
(Optimal regulator problem) For a given horizon $K$ and initial state $a$, find a control sequence $u_0^*, \ldots, u_{K-1}^*$ that minimizes the cost functional $J$, that is,

$$J(a; u_0^*, \ldots, u_{K-1}^*) = \min_{u_0, \ldots, u_{K-1}} J(a; u_0, \ldots, u_{K-1}),$$

over all possible control sequences $u_0, \ldots, u_{K-1}$.
Thus the optimal regulator problem is a type of optimal control problem, and the control sequence $u_0^*, \ldots, u_{K-1}^*$ is called an optimal control sequence.
Solution of optimal regulator problem
A standard approach to solving the discrete time optimal regulator problem is by using dynamic programming. In this case, a special role is played by the so-called cost-to-go functional $J_k$, defined as the cost from step $k$ to step $K$ starting at $x_k = b$:

$$J_k(b; u_k, \ldots, u_{K-1}) = \sum_{j=k}^{K-1} \left( x_j^\top Q_j x_j + u_j^\top R_j u_j \right) + x_K^\top M x_K, \qquad x_k = b.$$
Another important notion is that of the value function at step $k$, $V_k$, defined as:

$$V_k(b) = \inf_{u_k, \ldots, u_{K-1}} J_k(b; u_k, \ldots, u_{K-1}).$$
For the optimal regulator problem it can be shown that the value function $V_k$ is also a quadratic function (of the variable $b$). It is given by:

$$V_k(b) = b^\top P_k b,$$

where $P_k$ is a real positive semidefinite symmetric matrix satisfying the backward recursion:

$$P_k = Q_k + A_k^\top P_{k+1} A_k - A_k^\top P_{k+1} B_k \left( R_k + B_k^\top P_{k+1} B_k \right)^{-1} B_k^\top P_{k+1} A_k,$$

with the final condition:

$$P_K = M.$$
Moreover, the infimum of $J_k$ for a fixed $b$ is attainable at the optimal sequence $u_k^*, \ldots, u_{K-1}^*$ given by:

$$u_j^* = -\left( R_j + B_j^\top P_{j+1} B_j \right)^{-1} B_j^\top P_{j+1} A_j \, x_j^*, \qquad j = k, \ldots, K-1,$$

where $x_j^*$, $j = k, \ldots, K$, satisfies the plant dynamics with $x_k^* = b$. It then follows that the minimum value of the cost functional (1) is simply:

$$J(a; u_0^*, \ldots, u_{K-1}^*) = V_0(a) = a^\top P_0 a.$$
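The whole procedure can be sketched for a scalar plant with constant, made-up coefficients: a backward Riccati recursion produces the matrices $P_k$ and the feedback gains, a forward pass applies the optimal feedback, and the achieved cost can be checked against $a^\top P_0 a$:

```python
def riccati_gains(A, B, Q, R, M, K):
    """Backward recursion for a scalar plant:
    P_k = Q + A^2*P_{k+1} - (A*B*P_{k+1})^2 / (R + B^2*P_{k+1}),  P_K = M.
    Returns P_0..P_K and the gains F_k, where u_k^* = -F_k * x_k^*."""
    P = [0.0] * (K + 1)
    P[K] = M
    F = [0.0] * K
    for k in range(K - 1, -1, -1):
        denom = R + B * P[k + 1] * B
        F[k] = A * B * P[k + 1] / denom
        P[k] = Q + A * P[k + 1] * A - (A * B * P[k + 1]) ** 2 / denom
    return P, F

def optimal_rollout(A, B, F, a):
    """Forward pass: apply the optimal feedback u_k^* = -F_k * x_k^*
    starting from x_0^* = a."""
    xs, us = [a], []
    for f in F:
        us.append(-f * xs[-1])
        xs.append(A * xs[-1] + B * us[-1])
    return xs, us

# hypothetical numbers: A = 0.9, B = 0.5, Q = R = M = 1, K = 3, a = 2
P, F = riccati_gains(0.9, 0.5, 1.0, 1.0, 1.0, K=3)
xs, us = optimal_rollout(0.9, 0.5, F, a=2.0)
achieved = (sum(1.0 * x * x for x in xs[:-1])
            + sum(1.0 * u * u for u in us)
            + 1.0 * xs[-1] ** 2)
# by the result above, the achieved cost equals a^2 * P_0 (up to rounding)
```

The equality between `achieved` and $a^2 P_0$ is exactly the statement that the minimum of the cost functional is $a^\top P_0 a$, here verified numerically for one made-up instance.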
The validity of the above results depends crucially on the assumption that $R_k > 0$ for all $k$; this prevents the problem from becoming "singular". It also guarantees the uniqueness of the optimal sequence (i.e., there is exactly one sequence minimizing the cost functional (1)).