Linear quadratic control
In control engineering and systems and control theory, linear quadratic control or LQ control refers to controller design for a deterministic linear plant based on the minimization of a quadratic cost functional. The method is based on the state space formalism and is a fundamental concept in linear systems and control theory.
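A brief sketch of the underlying set-up, in conventional textbook notation that is not part of the original text: in the state space formalism the plant is modelled by a linear state equation, in discrete or continuous time,

\[
x_{k+1} = A x_k + B u_k \qquad \text{or} \qquad \dot{x}(t) = A x(t) + B u(t),
\]

where x is the state vector, u is the control input, and A and B are known matrices describing the plant.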
There are two main versions of the method, depending on the setting of the control problem; the corresponding cost functionals are sketched after the list:
- Linear quadratic control in discrete time
- Linear quadratic control in continuous time
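In the two settings, the quadratic cost functional to be minimized typically takes one of the following forms (a sketch in conventional notation; the weighting matrices Q and Q_N are positive semidefinite, R is positive definite, and the horizon N or T is a design choice, none of which is prescribed by the original text):

\[
J = \sum_{k=0}^{N-1} \left( x_k^\top Q x_k + u_k^\top R u_k \right) + x_N^\top Q_N x_N
\qquad \text{or} \qquad
J = \int_0^T \left( x(t)^\top Q x(t) + u(t)^\top R u(t) \right) \mathrm{d}t + x(T)^\top Q_T x(T).
\]

Infinite-horizon formulations drop the terminal term and let the horizon tend to infinity.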
The objective of LQ control is to find a control signal that minimizes a prescribed quadratic cost functional. In the so-called regulation problem, this functional can be viewed as an abstraction of the "energy" of the overall control system, and minimizing the functional corresponds to minimizing that energy.
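As an illustration of the regulation problem, the following sketch computes a discrete-time LQ regulator gain with SciPy's solve_discrete_are and simulates the resulting closed loop; the plant (a discretized double integrator) and the weights Q and R are illustrative assumptions rather than anything prescribed by the text above.

# A minimal sketch of discrete-time LQ regulation; the plant and weights are
# illustrative choices, not taken from the article.
import numpy as np
from scipy.linalg import solve_discrete_are

# Plant x_{k+1} = A x_k + B u_k (double integrator discretized with step 0.1 s)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])

# Quadratic cost J = sum_k ( x_k^T Q x_k + u_k^T R u_k )
Q = np.diag([1.0, 0.1])   # penalizes state deviation (the "energy" of the state)
R = np.array([[0.01]])    # penalizes control effort

# Solve the discrete algebraic Riccati equation for the cost-to-go matrix P
P = solve_discrete_are(A, B, Q, R)

# Optimal state feedback u_k = -K x_k for the infinite-horizon problem
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Simulate the closed loop from an initial state to observe the regulation behaviour
x = np.array([[1.0], [0.0]])
for _ in range(50):
    u = -K @ x
    x = A @ x + B @ u
print("gain K =", K)
print("final state =", x.ravel())

Running the loop drives the state toward the origin: the cost trades state deviation (weighted by Q) against control effort (weighted by R), which is the "energy" interpretation described above.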