State Variable Feedback Controller
State variables and state-space forms can be utilized to design the required control method for linear systems in the time domain. The control action u(t) is a function of all or some of the state variables. When all the states are utilized to build the controller, it is referred to as a full-state feedback controller. Generally, since measurements of all the state variables are not available, we need to build an observer to estimate the states that are not directly measured.
Full-state Feedback Control Design
The full-state feedback controller employs all the state variables to place the poles of the system at the desired locations. Let the following state variable model represent the linear system
where the A, B, and C matrices have the same meaning as described previously in Chapter 6.
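For reference, the standard linear time-invariant state-space form with these matrices is

$$\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t),$$

where x(t) is the state vector, u(t) is the control input, and y(t) is the measured output.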
Regulation Problem: The control objective is identified as a regulation problem when we require the states to become zero beginning from any given initial state. The design also ensures the system’s internal stability while taking into account the desired transients. Assuming that the measurements of all state variables x(t) are available, the system control input for a regulation problem becomes
where K is the control gain to be designed. Using the control design in (12.2), the system in (12.1) becomes the closed-loop system given as
The closed-loop system is stable if all its poles are in the left-half plane or, equivalently, if all the roots of the associated characteristic equation lie in the left-half plane. The characteristic equation of (12.3) is
The solution of (12.3) is given as
where x(t) → 0 as t → ∞ provided all the poles are located in the left-half plane. If the system (A, B) is controllable, then we can always obtain a K that places the poles of the closed-loop system in the left-half plane. For additional details on controllability and observability, readers are referred to [88] and [89].
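As a concrete illustration of pole placement for the regulation problem, the following MATLAB sketch designs a gain K for a hypothetical second-order system; the matrices A and B, the desired pole locations, and the use of the Control System Toolbox function place are assumptions made only for this example, not values from the text.

```matlab
% Hypothetical second-order system (A and B chosen for illustration only)
A = [0 1; -2 -3];
B = [0; 1];

% Desired closed-loop pole locations (assumed for this sketch)
p = [-4, -5];

% Full-state feedback gain for u(t) = -K*x(t)
K = place(A, B, p);

% Closed-loop dynamics x_dot = (A - B*K)*x; verify the pole placement
disp(eig(A - B*K));
```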
Tracking Problem: The control objective is identified as a tracking problem when we require the states to track a reference signal r(t). The steps to solve a tracking problem are similar to those of the regulation problem. Assuming that the measurements of all state variables x(t) are available, the system control input for a tracking problem becomes
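One common form of such a tracking law (stated here as an assumption, since different texts use different formulations) augments the state feedback with the reference signal,

$$u(t) = -K\,x(t) + N\,r(t),$$

where N is a feedforward gain chosen so that the output reaches r(t) in steady state.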
Observer Design
Usually, measuring all the states is neither feasible nor practical. If the system is fully observable, one can build an observer that estimates the states using only the limited measurements that are available. One such observer is the Luenberger observer, described next. For the linear system described by
the Luenberger observer is given as
where x̂(t) is the estimated state vector, and L is the Luenberger observer gain that needs to be designed. The objective of the observer is to drive the estimation error to zero, i.e.
The time derivative of the estimation error leads to the following error dynamics
which after further simplification becomes
having the characteristic equation given as
Provided that the system (A, C) is observable, one can design an appropriate L that places the poles of the observer error dynamics in the left-half plane, ensuring e(t) → 0 as t → ∞. For additional details on controllability and observability, readers are directed to [88].
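A minimal MATLAB sketch of the observer-gain design follows, again for an assumed second-order system and assumed observer pole locations; by duality, the same place function used for state feedback can be applied to the pair (A', C').

```matlab
% Hypothetical system matrices (illustrative only)
A = [0 1; -2 -3];
C = [1 0];

% Desired observer poles, typically chosen faster than the controller poles (assumed)
p_obs = [-8, -9];

% Observer gain via duality: the eigenvalues of (A - L*C) are placed at p_obs
L = place(A', C', p_obs)';

% Error dynamics e_dot = (A - L*C)*e; verify the placement
disp(eig(A - L*C));
```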
Full-state Feedback Controller and Observer
The state variable compensator is obtained by coupling the designed feedback controller and observer. Since we do not have all the state measurements, we utilize the estimated state x̂(t) in the control law as follows
The question arises whether the independently designed controller and observer would still work when combined. In other words, would the roots of the characteristic equations (12.4) and (12.9) still lie in the left-half plane? The answer lies in the separation principle, which states that the full-state feedback controller and the observer can be designed separately and then combined.
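A brief sketch of why this works: writing the closed-loop dynamics in terms of the state x(t) and the estimation error e(t) = x(t) - x̂(t) (a standard argument, reproduced here for reference) gives the block-triangular system

$$
\begin{bmatrix} \dot{x}(t) \\ \dot{e}(t) \end{bmatrix}
=
\begin{bmatrix} A - BK & BK \\ 0 & A - LC \end{bmatrix}
\begin{bmatrix} x(t) \\ e(t) \end{bmatrix},
$$

so the closed-loop poles are the eigenvalues of A - BK together with those of A - LC, i.e., exactly the controller poles and observer poles that were designed separately.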
PID Controller
PID controllers have been extensively used in manufacturing processes and factories for a very long time. PID stands for proportional-integral-derivative controller and, as the name suggests, a PID controller is a combination of three controllers (see Figure 12.1):
• Proportional Controller
• Integral Controller
• Derivative Controller
The three components have distinct roles, addressing three different aspects of the error (present, past, and future), which makes the overall controller versatile. The PID control law is given as
where u(t) is the control input and e(t) is the system error, i.e., the deviation from the reference trajectory r(t).
The proportional controller rectifies the current error by being “proportional to the current error” in the system. The proportional gain Kp decides how quickly the controller reacts and how much steady-state error remains. A high Kp will make the controller react sharply to an error, which may decrease the steady-state error but, at the same time, create a risk of system instability.

FIGURE 12.1: A block diagram of a PID controller.
The integral controller rectifies the past error that may have accumulated, by being “proportional to both the error and its duration”. Thus the integral gain Ki reduces the steady-state error, but this may come at the cost of the transient response.
The derivative controller rectifies future error and provides stability to the system by responding to the error’s rate of change. The derivative gain Kd stabilizes the system, improves the transient response, and reduces the overshoot.
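To make the three terms concrete, the following MATLAB sketch implements the standard PID law u(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt in discrete time on a hypothetical first-order plant; the plant model, gains, sample time, and setpoint are illustrative assumptions only.

```matlab
% Hypothetical first-order plant: x_dot = -x + u, output y = x
dt = 0.01; T = 0:dt:5; r = 1;          % sample time and constant setpoint (assumed)
Kp = 4; Ki = 2; Kd = 0.5;              % illustrative PID gains

x = 0; e_int = 0; e_prev = 0;
y = zeros(size(T));
for k = 1:numel(T)
    e = r - x;                         % current error (proportional term)
    e_int = e_int + e*dt;              % accumulated error (integral term)
    e_der = (e - e_prev)/dt;           % rate of change of error (derivative term)
    u = Kp*e + Ki*e_int + Kd*e_der;    % PID control input
    x = x + dt*(-x + u);               % Euler step of the plant
    e_prev = e;
    y(k) = x;
end
plot(T, y); xlabel('time (s)'); ylabel('output');
```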
There are several methods to tune the PID parameters Kp, Ki, and Kd, including manual tuning, the Ziegler-Nichols method, and the Cohen-Coon method. Readers interested in these methods and an in-depth explanation of PID control are referred to [90] and [91].
Optimal Control
The field of optimal control was first conceptualized in 1697, over 300 years ago. It started as a mathematical challenge posed by Johann Bernoulli, the Brachystochrone problem (see Figure 12.2) [92]. Bernoulli’s problem statement was as follows:
“Given two points A and B in a vertical plane, what is the curve traced out by a point acted on only by gravity, which starts at A and reaches B in the shortest time.”

FIGURE 12.2: Brachystochrone problem.
Five mathematicians, namely Isaac Newton, Jakob Bernoulli, Gottfried Leibniz, Ehrenfried Walther von Tschirnhaus, and Guillaume de l’Hôpital, each came up with their own solution, which started the field of calculus of variations and eventually led to optimal control theory.
The idea behind optimal control is to achieve a system’s best possible performance by defining performance criteria and optimizing the corresponding performance measures. An optimal control problem consists of a cost (objective) function, the system dynamics, and constraints on the state and control variables. The objective is to minimize or maximize the cost function (made up of state and control variables) subject to the constraints. Optimal control is used in many disciplines, including biomedical devices, communication networks, and economic systems. Optimization problems can be approached geometrically or numerically, and the optimization variables can be real numbers, integers, or a combination of both.
Performance Measure
A performance measure is a quantitative criterion that helps in evaluating the system performance under the designed control actions. The design of a performance measure is an important step in optimal control. Once the performance measure is identified, the goal of the optimal control design is to minimize or maximize that measure. Depending on the context, it may also be referred to as a cost function, loss function, objective function, fitness function, utility function, or reward function, all having the same underlying meaning. The general form of the performance measure is given as
where t0 is the start time, tf is the end time, h is the terminal cost at the end time tf, and the function g captures the running cost.
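In standard notation, and consistent with this description, the performance measure takes the form

$$J = h\big(x(t_f), t_f\big) + \int_{t_0}^{t_f} g\big(x(t), u(t), t\big)\, dt,$$

with h penalizing the terminal state and g accumulating cost along the trajectory.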
As we will see in later chapters, optimal control is a natural technique for application in information spread, campaign management, and advertising. There are two main approaches to solving an optimal control problem: the dynamic programming principle (DPP) and Pontryagin’s minimization principle. We discuss both techniques in the following sections.
Dynamic Programming and Principle of Optimality
This section presents a brief mathematical background for one of the methods used to solve an optimal control problem. Using the concepts of dynamic programming and the principle of optimality, we obtain the Hamilton-Jacobi-Bellman (HJB) equation, a partial differential equation (PDE), which establishes the basis for optimal control theory. For an optimal control problem formulated in terms of a system’s state dynamics and an associated cost function, dynamic programming provides the solution in terms of a “value function” that minimizes or maximizes the cost function [93].
Assume the system dynamics are represented as
where x(t) = [x1(t), x2(t), ..., xn(t)]^T is the state vector containing the state variables and u(t) = [u1(t), u2(t), ..., um(t)]^T is the control vector containing the control inputs.
The control objective is to find an optimal u* that drives the system dynamics (12.13) such that the cost function
is minimized.
The value function for the associated cost function is given as
and it can be proved that a solution to equation (12.15) is obtained by solving the following HJB equation:
Next, we define the Hamiltonian H as
then the HJB PDE can be rewritten as
where the optimal control u*(t) can be found by solving
The obtained optimal control u*(t) is now substituted into equation (12.18), which creates a PDE in J*. This resulting PDE in J* can be solved numerically or, in special cases, analytically.
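For reference, the standard forms of the quantities used in this section (following Kirk’s treatment [93]; this is a sketch of the standard formulation and not necessarily the text’s exact notation) are

$$\dot{x}(t) = f\big(x(t), u(t), t\big), \qquad J = h\big(x(t_f), t_f\big) + \int_{t_0}^{t_f} g\big(x(t), u(t), t\big)\, dt,$$

$$\mathcal{H}\big(x, u, J^{*}_{x}, t\big) = g(x, u, t) + J^{*T}_{x}\, f(x, u, t), \qquad 0 = J^{*}_{t}(x, t) + \min_{u}\, \mathcal{H}\big(x, u, J^{*}_{x}, t\big),$$

where J*(x, t) is the value function, J*_x and J*_t are its partial derivatives with respect to x and t, and the minimizing u at each (x, t) yields the optimal control u*(t).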
Pontryagin's Minimization Principle
Pontryagin’s minimization principle is another prominent approach to optimal control, based on the calculus of variations. The principle lays down a set of necessary (but not sufficient) conditions for optimality. In contrast, recall that the HJB equation provided sufficient conditions. Consequently, at best, the minimization principle can only identify candidate trajectories that may be locally optimal. The minimization principle alone may not lead to the conclusion that an obtained solution trajectory is optimal. However, it is valuable for obtaining potential optimal trajectories in many cases. A candidate trajectory is not optimal if it fails to satisfy the necessary conditions laid down by the minimum principle.
The minimization principle is generally formulated in terms of adjoint variables and a Hamiltonian function. If the system dynamics are given by
then the optimal control u* which drives the system to minimize the cost function
must satisfy certain necessary conditions. First, we define the Hamiltonian H as
where p(t) denotes an n-dimensional vector of adjoint variables. The necessary conditions for u* to be the optimal control are:

and the boundary conditions are provided by

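For reference, the standard statement of these necessary conditions (again following Kirk [93], with H defined as above; a sketch of the standard formulation rather than the text’s exact notation) is

$$\dot{x}^{*}(t) = \frac{\partial \mathcal{H}}{\partial p}\big(x^{*}, u^{*}, p^{*}, t\big), \qquad \dot{p}^{*}(t) = -\frac{\partial \mathcal{H}}{\partial x}\big(x^{*}, u^{*}, p^{*}, t\big),$$

$$\mathcal{H}\big(x^{*}(t), u^{*}(t), p^{*}(t), t\big) \le \mathcal{H}\big(x^{*}(t), u(t), p^{*}(t), t\big) \quad \text{for all admissible } u(t),$$

with the boundary conditions supplied by the specified initial state and by a transversality condition at the final time that relates p*(tf) to the gradient of the terminal cost h at x*(tf).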
Note that this formulation results in a coupled system of nonlinear ODEs, unlike the HJB approach, in which we are required to solve a PDE. For a more rigorous treatment and a detailed explanation of dynamic programming, the calculus of variations, and optimal control theory, we recommend Optimal Control Theory: An Introduction by Donald E. Kirk [93]. The applications of optimal control theory are widespread; the technique has been applied to sustainable transportation systems [94], optimal information diffusion on social networks [95, 96], and power networks, among many others.
Illustrative Example

FIGURE 12.3: Optimal time car problem.
Consider the car shown in Figure 12.3. The distance traveled by the car at time t is denoted by x(t). The acceleration and deceleration are bounded by M and -M, respectively. The dynamics of the system can be represented as
or, in state-variable form,
where x1(t) and x2(t) are the state variables representing the position and velocity of the car, respectively, and u(t) represents the control action (acceleration or deceleration) applied to the car.

FIGURE 12.4: Sample trajectories.
Figure 12.4 shows possible control trajectories of the car. The question arises: which control trajectory (acceleration and deceleration profile) should one choose? The answer depends on the control objective selected. Next, we describe a few standard control objectives and their corresponding performance measures.
Objective 1: To drive the car from point A to point B in the minimum amount of time. This is a minimum-time problem, and its performance measure can be described as
i.e., J = tf, where tf is the time at which the car reaches point B.
Objective 2: To drive the car from point A to point B with minimum fuel expenditure. This is a minimum-control problem, and its performance measure can be described as
where we assume that acceleration and deceleration are directly proportional to the amount of fuel consumed by the car.
Objective 3: To drive the car from point A to point B in the minimum time while minimizing the fuel expenditure. This is a mixed-control problem, and its performance measure can be described as
where λ is the weight assigned to the minimum-time component of the performance measure, whereas R is the weight assigned to the minimum-control component of the performance measure.
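One common way to write these three performance measures (a sketch consistent with the descriptions above, assuming the journey starts at t = 0; not necessarily the text’s exact expressions) is

$$J_1 = \int_{0}^{t_f} 1 \, dt = t_f, \qquad J_2 = \int_{0}^{t_f} |u(t)| \, dt, \qquad J_3 = \int_{0}^{t_f} \big(\lambda + R\,|u(t)|\big)\, dt,$$

so that λ weights elapsed time and R weights control effort in the mixed objective.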
More formally, consider an optimal control problem to drive the system described by
with the control constraint -M ≤ u(t) ≤ M, in minimum time from point A to point B, from rest to rest, i.e., x2(0) = 0 and x2(tf) = 0.
It can be derived that the optimal trajectories u*(t), x1*(t), and x2*(t) are as shown in Figure 12.5. A closer look at the u*(t) trajectory shows that, to drive the car to the destination in minimum time, one applies maximum throttle (u(t) = M) during the first half of the journey and then maximum braking (u(t) = -M) during the second half. Note that this driving strategy is time-optimal but not control-optimal, i.e., it will consume a significantly larger amount of fuel!

FIGURE 12.5: Optimal trajectories.
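A short MATLAB sketch of the bang-bang strategy just described follows; the distance D, the acceleration bound M, and the grid resolution are illustrative assumptions, and the switch is placed at the midpoint of the journey, as required for rest-to-rest motion with symmetric bounds.

```matlab
% Bang-bang (time-optimal) profile for the car example: full throttle, then full braking.
D = 100; M = 2;                 % distance from A to B and acceleration bound (assumed)
ts = sqrt(D/M);                 % switching time: midpoint of the journey
tf = 2*ts;                      % minimum final time for rest-to-rest motion

t = linspace(0, tf, 1000);
u = M*ones(size(t));            % maximum throttle in the first half
u(t > ts) = -M;                 % maximum braking in the second half

% Integrate x1_dot = x2, x2_dot = u by cumulative trapezoidal integration
x2 = cumtrapz(t, u);            % velocity
x1 = cumtrapz(t, x2);           % position

subplot(3,1,1); plot(t, u);  ylabel('u^*(t)');
subplot(3,1,2); plot(t, x1); ylabel('x_1^*(t)');
subplot(3,1,3); plot(t, x2); ylabel('x_2^*(t)'); xlabel('time (s)');
```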
Exercises
1. A second-order mechanical system has the state-space form
a) Check the observability of the system.
b) Design a Luenberger observer for the system so that the closed-loop poles are placed at -1 and -2.
c) Write the observer equation using the observer gains obtained in the previous part.
2. The system dynamics are given as
In order to place the closed-loop poles at s = -1 ± 3j, find the required state-variable feedback, assuming that the complete state vector is available.
3. Consider the system
Using the ctrb and obsv functions in MATLAB, show that the system is both controllable and observable.
4. Find a feedback gain matrix K so that the closed-loop poles of the system described by
are located at s1 = -1 and s2 = -2. Use the state feedback control law u(t) = -Kx(t).
5. The dynamics of a system are given as:
and the cost function to be minimized is
The optimal feedback solution is to be found using Pontryagin’s minimization principle. The admissible states and controls are not bounded. The boundary conditions are X(0) = [0 0]' and X(2) = [5 2]'.
a) Find the necessary conditions that must be satisfied (obtain the state equations and co-state equations).
b) Try to solve analytically using the necessary conditions and boundary values.
c) Develop MATLAB code to solve the resulting ODE system symbolically using the syms and dsolve functions.
d) Compare the results obtained in parts (b) and (c) by plotting the relevant state trajectories.
6. Given the system dynamics
and the cost function to be minimized as
Using the HJB approach, find the optimal control U*(t) expressed as a function of X(t), t, and