Sliding mode control




From Wikipedia, the free encyclopedia

Figure 1: Phase plane trajectory of a system being stabilized by a sliding mode controller. After the initial reaching phase, the system state "slides" along the line s = 0. The particular s = 0 surface is chosen because the system has desirable reduced-order dynamics when constrained to it. In this case, the s = x_1 + \dot{x}_1 = 0 surface corresponds to the first-order LTI system \dot{x}_1 = -x_1, which has an exponentially stable origin.

In control theory, sliding mode control, or SMC, is a form of variable structure control (VSC). It is a nonlinear control method that alters the dynamics of a nonlinear system by application of a high-frequency switching control. The state-feedback control law is not a continuous function of time. Instead, it switches from one continuous structure to another based on the current position in the state space. Hence, sliding mode control is a variable structure control method. The multiple control structures are designed so that trajectories always move toward a switching condition, and so the ultimate trajectory will not exist entirely within one control structure. Instead, the ultimate trajectory will slide along the boundaries of the control structures. The motion of the system as it slides along these boundaries is called a sliding mode[1] and the geometrical locus consisting of the boundaries is called the sliding (hyper)surface. Figure 1 shows an example trajectory of a system under sliding mode control. The sliding surface is described by s = 0, and the sliding mode along the surface commences after the finite time when system trajectories have reached the surface. In the context of modern control theory, any variable structure system, like a system under SMC, may be viewed as a special case of a hybrid dynamical system.

Intuitively, sliding mode control uses practically infinite gain to force the trajectories of a dynamic system to slide along the restricted sliding mode subspace. Trajectories from this reduced-order sliding mode have desirable properties (e.g., the system naturally slides along it until it comes to rest at a desired equilibrium). The main strength of sliding mode control is its robustness. Because the control can be as simple as a switching between two states (e.g., "on"/"off" or "forward"/"reverse"), it need not be precise and will not be sensitive to parameter variations that enter into the control channel. Additionally, because the control law is not a continuous function, the sliding mode can be reached in finite time (i.e., better than asymptotic behavior). Under certain common conditions, optimality requires the use of bang–bang control; hence, sliding mode control describes the optimal controller for a broad set of dynamic systems.

One application of sliding mode controllers is the control of electric drives operated by switching power converters.[2]:"Introduction" Because of the discontinuous operating mode of those converters, a discontinuous sliding mode controller is a natural implementation choice over continuous controllers that may need to be applied by means of pulse-width modulation or a similar technique[nb 1] of applying a continuous signal to an output that can only take discrete states.

Sliding mode control must be applied with more care than other forms of nonlinear control that have more moderate control action. In particular, because actuators have delays and other imperfections, the hard sliding-mode-control action can lead to chatter, energy loss, plant damage, and excitation of unmodeled dynamics.[3]:554–556 Continuous control design methods are not as susceptible to these problems and can be made to mimic sliding-mode controllers.[3]:556–563


Control scheme

Consider a nonlinear dynamical system described by

 \dot{\mathbf{x}}(t)=f(\mathbf{x},t) + B(\mathbf{x},t)\,\mathbf{u}(t) \qquad \text{(1)}


where

\mathbf{x}(t) \triangleq \begin{bmatrix}x_1(t)\\x_2(t)\\\vdots\\x_{n-1}(t)\\x_n(t)\end{bmatrix} \in \mathbb{R}^n

is an n-dimensional state vector and

\mathbf{u}(t) \triangleq \begin{bmatrix}u_1(t)\\u_2(t)\\\vdots\\u_{m-1}(t)\\u_m(t)\end{bmatrix} \in \mathbb{R}^m

is an m-dimensional input vector that will be used for state feedback. The functions f: \mathbb{R}^n \times \mathbb{R} \mapsto \mathbb{R}^n and B: \mathbb{R}^n \times \mathbb{R} \mapsto \mathbb{R}^{n \times m} are assumed to be continuous and sufficiently smooth so that the Picard–Lindelöf theorem can be used to guarantee that the solution \mathbf{x}(t) to Equation (1) exists and is unique.

A common task is to design a state-feedback control law \mathbf{u}(\mathbf{x}(t)) (i.e., a mapping from current state \mathbf{x}(t) at time t to the input \mathbf{u}) to stabilize the dynamical system in Equation (1) around the origin \mathbf{x} = [0, 0, \ldots, 0]^{\text{T}}. That is, under the control law, whenever the system is started away from the origin, it will return to it. For example, the component x_1 of the state vector \mathbf{x} may represent the difference between some output and a known desired signal (e.g., a desirable sinusoidal signal); if the control \mathbf{u} can ensure that x_1 quickly returns to x_1 = 0, then the output will track the desired sinusoid. In sliding-mode control, the designer knows that the system behaves desirably (e.g., it has a stable equilibrium) provided that it is constrained to a subspace of its configuration space. Sliding mode control forces the system trajectories into this subspace and then holds them there so that they slide along it. This reduced-order subspace is referred to as a sliding (hyper)surface, and when closed-loop feedback forces trajectories to slide along it, it is referred to as a sliding mode of the closed-loop system. Trajectories along this subspace can be likened to trajectories along eigenvectors (i.e., modes) of LTI systems; however, the sliding mode is enforced by creasing the vector field with high-gain feedback. Like a marble rolling along a crack, trajectories are confined to the sliding mode.

The sliding-mode control scheme involves

  1. Selection of a hypersurface or a manifold (i.e., the sliding surface) such that the system trajectory exhibits desirable behavior when confined to this manifold.
  2. Finding feedback gains so that the system trajectory intersects and stays on the manifold.

Because sliding mode control laws are not continuous, they are able to drive trajectories to the sliding mode in finite time (i.e., stability of the sliding surface is better than asymptotic). However, once the trajectories reach the sliding surface, the system takes on the character of the sliding mode (e.g., the origin \mathbf{x}=\mathbf{0} may only have asymptotic stability on this surface).

The sliding-mode designer picks a switching function \sigma: \mathbb{R}^n \mapsto \mathbb{R}^m that represents a kind of "distance" that the states \mathbf{x} are away from a sliding surface.

  • A state \mathbf{x} that is outside of this sliding surface has \sigma(\mathbf{x}) \neq 0.
  • A state that is on this sliding surface has \sigma(\mathbf{x}) = 0.

The sliding-mode-control law switches from one state to another based on the sign of this distance. So the sliding-mode control acts like a stiff pressure always pushing in the direction of the sliding mode where \sigma(\mathbf{x}) = 0. Desirable \mathbf{x}(t) trajectories will approach the sliding surface, and because the control law is not continuous (i.e., it switches from one state to another as trajectories move across this surface), the surface is reached in finite time. Once a trajectory reaches the surface, it will slide along it and may, for example, move toward the \mathbf{x} = \mathbf{0} origin. So the switching function is like a topographic map with a contour of constant height along which trajectories are forced to move.
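As a minimal illustration of this switching behavior, the following Python sketch (the switching function, gains, and names are illustrative choices, not from any particular design) selects between two control structures based on the sign of σ:

```python
# Minimal sketch of a variable-structure (sliding mode) control law.
# The switching function sigma measures the signed "distance" from the
# sliding surface; the control switches structure on its sign.
# All names and gains here are illustrative.

def sigma(x1, x2):
    # Example switching function: sigma(x) = x1 + x2, so sigma = 0
    # is the sliding surface.
    return x1 + x2

def smc_law(x1, x2, u_plus=-1.0, u_minus=1.0):
    # Discontinuous state feedback: one structure where sigma > 0,
    # another where sigma < 0.
    s = sigma(x1, x2)
    if s > 0:
        return u_plus    # drives sigma downward, toward the surface
    elif s < 0:
        return u_minus   # drives sigma upward, toward the surface
    return 0.0           # exactly on the surface (measure-zero case)
```

In a real design, u_plus and u_minus would be chosen, as in the examples later in the article, so that \sigma \dot{\sigma} < 0 holds on both sides of the surface.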

The sliding (hyper)surface is of dimension n − m, where n is the number of states in \mathbf{x} and m is the number of input signals (i.e., control signals) in \mathbf{u}. For each control index 1 \leq k \leq m, there is an (n − 1)-dimensional sliding surface given by

 \left\{ \mathbf{x} \in \mathbb{R}^n : \sigma_k(\mathbf{x}) = 0 \right\} \qquad \text{(2)}

The vital part of VSC design is to choose a control law so that the sliding mode (i.e., this surface given by \sigma(\mathbf{x})=\mathbf{0}) exists and is reachable along system trajectories. The principle of sliding mode control is to forcibly constrain the system, by suitable control strategy, to stay on the sliding surface on which the system will exhibit desirable features. When the system is constrained by the sliding control to stay on the sliding surface, the system dynamics are governed by the reduced-order system obtained from Equation (2).

To force the system states \mathbf{x} to satisfy \sigma(\mathbf{x}) = \mathbf{0}, one must:

  1. Ensure that the system is capable of reaching \sigma(\mathbf{x}) = \mathbf{0} from any initial condition, and
  2. ensure that, having reached \sigma(\mathbf{x})=\mathbf{0}, the control action is capable of maintaining the system at \sigma(\mathbf{x})=\mathbf{0}.

Existence of closed-loop solutions

Note that because the control law is not continuous, it is certainly not locally Lipschitz continuous, and so existence and uniqueness of solutions to the closed-loop system are not guaranteed by the Picard–Lindelöf theorem. Thus the solutions are to be understood in the Filippov sense.[4][1] Roughly speaking, the resulting closed-loop system moving along \sigma(\mathbf{x}) = \mathbf{0} is approximated by the smooth dynamics \dot{\sigma}(\mathbf{x}) = \mathbf{0}; however, this smooth behavior may not be truly realizable. Similarly, high-speed pulse-width modulation or delta-sigma modulation produces outputs that only assume two states, but the effective output swings through a continuous range of motion. These complications can be avoided by using a different nonlinear control design method that produces a continuous controller. In some cases, sliding-mode control designs can be approximated by other continuous control designs.[3]

Theoretical foundation

The following theorems form the foundation of variable structure control.

Theorem 1: Existence of Sliding Mode

Consider a Lyapunov function candidate

 V(\sigma(\mathbf{x})) = \frac{1}{2}\sigma^{\text{T}}(\mathbf{x})\sigma(\mathbf{x}) = \frac{1}{2}\|\sigma(\mathbf{x})\|_2^2 \qquad \text{(3)}

where \|\mathord{\cdot}\| is the Euclidean norm (i.e., \|\sigma(\mathbf{x})\|_2 is the distance away from the sliding manifold where \sigma(\mathbf{x})=\mathbf{0}). For the system given by Equation (1) and the sliding surface given by Equation (2), a sufficient condition for the existence of a sliding mode is that

 \underbrace{ \overbrace{\sigma^{\text{T}}}^{\tfrac{\partial V}{\partial \sigma}} \overbrace{\dot{\sigma}}^{\tfrac{\operatorname{d} \sigma}{\operatorname{d} t}} }_{\tfrac{\operatorname{d}V}{\operatorname{d}t}} < 0 \qquad \text{(i.e., } \tfrac{\operatorname{d}V}{\operatorname{d}t} < 0 \text{)}

in a neighborhood of the surface given by \sigma(\mathbf{x})=0.

Roughly speaking (i.e., for the scalar control case when m = 1), to achieve \sigma^{\text{T}} \dot{\sigma} < 0, the feedback control law  u(\mathbf{x}) is picked so that σ and \dot{\sigma} have opposite signs. That is,

  • u(\mathbf{x}) makes \dot{\sigma}(\mathbf{x}) negative when \sigma(\mathbf{x}) is positive.
  • u(\mathbf{x}) makes \dot{\sigma}(\mathbf{x}) positive when \sigma(\mathbf{x}) is negative.

Note that

\dot{\sigma} = \frac{\partial \sigma}{\partial \mathbf{x}} \overbrace{\dot{\mathbf{x}}}^{\tfrac{\operatorname{d} \mathbf{x}}{\operatorname{d} t}} = \frac{\partial \sigma}{\partial \mathbf{x}} \overbrace{\left( f(\mathbf{x},t) + B(\mathbf{x},t) \mathbf{u} \right)}^{\dot{\mathbf{x}}}

and so the feedback control law \mathbf{u}(\mathbf{x}) has a direct impact on \dot{\sigma}.

Reachability: Attaining sliding manifold in finite time

To ensure that the sliding mode \sigma(\mathbf{x})=\mathbf{0} is attained in finite time, \operatorname{d}V/{\operatorname{d}t} must be more strongly bounded away from zero. That is, if it vanishes too quickly, the attraction to the sliding mode will only be asymptotic. To ensure that the sliding mode is entered in finite time,[5]

\frac{\operatorname{d}V}{\operatorname{d}t} \leq -\mu (\sqrt{V})^{\alpha}

where μ > 0 and 0 < \alpha \leq 1 are constants.

Explanation by comparison lemma

This condition ensures that, in the neighborhood of the sliding mode where V \in [0,1],

\frac{\operatorname{d}V}{\operatorname{d}t} \leq -\mu (\sqrt{V})^{\alpha} \leq -\mu \sqrt{V}.

So, for V \in (0,1],

\frac{ 1 }{ \sqrt{V} } \frac{\operatorname{d}V}{\operatorname{d}t} \leq -\mu,

which, by the chain rule (i.e., \operatorname{d}W/{\operatorname{d}t} with W \triangleq 2 \sqrt{V}), means

\mathord{\underbrace{D^+ \Bigl( \mathord{\underbrace{2 \mathord{\overbrace{\sqrt{V}}^{ {} \propto \|\sigma\|_2}}}_{W}} \Bigr)}_{D^+ W \, \triangleq \, \mathord{\text{Upper right-hand } \dot{W}}}} = \frac{ 1 }{ \sqrt{V} } \frac{\operatorname{d}V}{\operatorname{d}t} \leq -\mu

where D^+ is the upper right-hand derivative of 2 \sqrt{V} and the symbol \propto denotes proportionality. So, by comparison to the curve z(t) = z_0 − μt, which is the solution of the differential equation \dot{z} = -\mu with initial condition z(0) = z_0, it must be the case that 2 \sqrt{V(t)} \leq 2 \sqrt{V(0)} - \mu t for all t. Moreover, because \sqrt{V} \geq 0, \sqrt{V} must reach \sqrt{V}=0 in finite time, which means that V must reach V = 0 (i.e., the system enters the sliding mode) in finite time.[3] Because \sqrt{V} is proportional to the Euclidean norm \|\mathord{\cdot}\|_2 of the switching function σ, this result implies that the rate of approach to the sliding mode must be firmly bounded away from zero.
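The comparison argument also gives an explicit bound on the reaching time; the following short derivation (consistent with the bounds above) makes it concrete.

```latex
% Integrating D^+ W \le -\mu with W \triangleq 2\sqrt{V} gives
2\sqrt{V(t)} \;\le\; 2\sqrt{V(0)} - \mu t ,
% and since \sqrt{V} \ge 0, the sliding mode V = 0 is reached no later than
t_{\text{reach}} \;\le\; \frac{2\sqrt{V(0)}}{\mu} .
```

Because \sqrt{V} \propto \|\sigma\|_2, a larger μ (i.e., stronger control authority) shrinks this worst-case reaching time proportionally.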

Consequences for sliding mode control

In the context of sliding mode control, this condition means that

 \underbrace{ \overbrace{\sigma^{\text{T}}}^{\tfrac{\partial V}{\partial \sigma}} \overbrace{\dot{\sigma}}^{\tfrac{\operatorname{d} \sigma}{\operatorname{d} t}} }_{\tfrac{\operatorname{d}V}{\operatorname{d}t}} \leq -\mu ( \mathord{\overbrace{\| \sigma \|_2}^{\sqrt{V}}} )^{\alpha}

where \|\mathord{\cdot}\| is the Euclidean norm. For the case when switching function σ is scalar valued, the sufficient condition becomes

 \sigma \dot{\sigma} \leq -\mu |\sigma|^{\alpha} .

Taking α = 1, the scalar sufficient condition becomes

 \operatorname{sign}(\sigma) \dot{\sigma} \leq -\mu

which is equivalent to the condition that

 \operatorname{sign}(\sigma) \neq \operatorname{sign}(\dot{\sigma}) \qquad \text{and} \qquad |\dot{\sigma}| \geq \mu > 0.

That is, the system should always be moving toward the switching surface σ = 0, and its speed |\dot{\sigma}| toward the switching surface should have a non-zero lower bound. So, even though σ may become vanishingly small as \mathbf{x} approaches the \sigma(\mathbf{x})=\mathbf{0} surface, \dot{\sigma} must always be bounded firmly away from zero. To ensure this condition, sliding mode controllers are discontinuous across the σ = 0 manifold; they switch from one non-zero value to another as trajectories cross the manifold.
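The contrast between this finite-time reaching and merely asymptotic convergence can be checked numerically. The Euler-integration sketch below (the step size and parameters are illustrative choices of this example, not values from any reference) compares \dot{\sigma} = -\mu \operatorname{sign}(\sigma), which satisfies the bound above with α = 1, against the smooth law \dot{\sigma} = -\sigma, which does not.

```python
# Finite-time reaching vs. asymptotic convergence (illustrative sketch).
# sigma' = -mu*sign(sigma) keeps |sigma'| bounded away from zero, so it
# reaches sigma = 0 by t = sigma(0)/mu; sigma' = -sigma only decays
# exponentially and never actually reaches zero.

def sign(s):
    return (s > 0) - (s < 0)

mu, dt, T = 1.0, 1e-3, 1.5
sigma_sw = 1.0   # driven by the discontinuous law
sigma_lin = 1.0  # driven by the smooth linear law
for _ in range(int(T / dt)):
    sigma_sw += dt * (-mu * sign(sigma_sw))
    sigma_lin += dt * (-sigma_lin)
# By t = 1.5, sigma_sw chatters in a small neighborhood of zero, while
# sigma_lin is still near exp(-1.5), i.e., about 0.22.
```

The residual oscillation in sigma_sw is the discrete-time analogue of chattering: the discontinuous law keeps crossing σ = 0 on every step.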

Theorem 2: Region of Attraction

For the system given by Equation (1) and sliding surface given by Equation (2), the subspace for which the \{ \mathbf{x} \in \mathbb{R}^n : \sigma(\mathbf{x})=\mathbf{0} \} surface is reachable is given by

\{ \mathbf{x} \in \mathbb{R}^n : \sigma^{\text{T}}(\mathbf{x})\dot{\sigma}(\mathbf{x}) < 0 \}

That is, when initial conditions come entirely from this space, the Lyapunov function candidate V(σ) is a Lyapunov function and \mathbf{x} trajectories are sure to move toward the sliding mode surface where \sigma( \mathbf{x} ) = \mathbf{0}. Moreover, if the reachability conditions from Theorem 1 are satisfied, the system will enter the region where \dot{V} is more strongly bounded away from zero in finite time. Hence, the sliding mode σ = 0 will be attained in finite time.

Theorem 3: Sliding Motion


Let

 \frac{\partial \sigma}{\partial{\mathbf{x}}} B(\mathbf{x},t)

be nonsingular. That is, the system has a kind of controllability that ensures that there is always a control that can move a trajectory closer to the sliding mode. Then, once the sliding mode where  \sigma(\mathbf{x}) = \mathbf{0} is achieved, the system will stay on that sliding mode. Along sliding mode trajectories, \sigma(\mathbf{x}) is constant, and so sliding mode trajectories are described by the differential equation

\dot{\sigma} = \mathbf{0}.

If an \mathbf{x}-equilibrium is stable with respect to this differential equation, then the system will slide along the sliding mode surface toward the equilibrium.

The equivalent control law on the sliding mode can be found by solving


\dot{\sigma}(\mathbf{x}) = \mathbf{0}

for the equivalent control law \mathbf{u}(\mathbf{x}). That is,

 \frac{\partial \sigma}{\partial \mathbf{x}} \overbrace{\left( f(\mathbf{x},t) + B(\mathbf{x},t) \mathbf{u} \right)}^{\dot{\mathbf{x}}} = \mathbf{0}

and so the equivalent control

\mathbf{u} = -\left( \frac{\partial \sigma}{\partial \mathbf{x}} B(\mathbf{x},t) \right)^{-1} \frac{\partial \sigma}{\partial \mathbf{x}} f(\mathbf{x},t)

That is, even though the actual control \mathbf{u} is not continuous, the rapid switching across the sliding mode where \sigma(\mathbf{x})=\mathbf{0} forces the system to act as if it were driven by this continuous control.
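For the scalar-input case, the equivalent-control formula above can be evaluated directly. The sketch below is an illustration (not from the article): it applies the formula to the double-integrator system \dot{x}_1 = x_2, \dot{x}_2 = a + u with σ = x_1 + x_2, the same system used in the examples later in this article, and confirms that the equivalent control reduces to -(x_2 + a).

```python
# Equivalent control for the single-input case (m = 1):
# u_eq = -(dsigma/dx . B)^(-1) * (dsigma/dx . f). Illustrative sketch.

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

def u_equivalent(dsigma_dx, f, B):
    # Scalar-input specialization of the equivalent-control formula.
    return -dot(dsigma_dx, f) / dot(dsigma_dx, B)

# Double integrator: x1' = x2, x2' = a + u; sigma(x) = x1 + x2.
x2, a = 0.7, -0.3
dsigma_dx = [1.0, 1.0]   # gradient of sigma
f = [x2, a]              # drift term f(x, t)
B = [0.0, 1.0]           # input channel B(x, t)
u_eq = u_equivalent(dsigma_dx, f, B)
# Here u_eq = -(x2 + a), which makes sigma' = x2 + a + u_eq = 0.
```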

Likewise, the system trajectories on the sliding mode behave as if

\dot{\mathbf{x}} = \overbrace{f(\mathbf{x},t) - B(\mathbf{x},t) \left( \frac{\partial \sigma}{\partial \mathbf{x}} B(\mathbf{x},t) \right)^{-1} \frac{\partial \sigma}{\partial \mathbf{x}} f(\mathbf{x},t)}^{f(\mathbf{x},t) + B(\mathbf{x},t) u} = \left( \mathbf{I} - B(\mathbf{x},t) \left( \frac{\partial \sigma}{\partial \mathbf{x}} B(\mathbf{x},t) \right)^{-1} \frac{\partial \sigma}{\partial \mathbf{x}} \right) f(\mathbf{x},t)

The resulting system matches the sliding mode differential equation

\dot{\sigma}(\mathbf{x}) = \mathbf{0}

and so as long as the sliding mode surface where \sigma(\mathbf{x})=\mathbf{0} is stable (in the sense of Lyapunov), the system can be assumed to follow the simpler \dot{\sigma} = 0 condition after some initial transient during the period while the system finds the sliding mode. The same motion is approximately maintained provided the equality  \sigma(\mathbf{x}) = \mathbf{0} only approximately holds.

It follows from these theorems that the sliding motion is invariant (i.e., insensitive) to sufficiently small disturbances entering the system through the control channel. That is, as long as the control is large enough to ensure that \sigma^{\text{T}} \dot{\sigma} < 0 and \dot{\sigma} is uniformly bounded away from zero, the sliding mode will be maintained as if there were no disturbance. The invariance property of sliding mode control to certain disturbances and model uncertainties is its most attractive feature; it is strongly robust.

As discussed in an example below, a sliding mode control law can keep the constraint

 \dot{x} + x = 0

in order to asymptotically stabilize any system of the form

 \ddot{x}=a(t,x,\dot{x}) + u

when a(\cdot) has a known finite upper bound on its magnitude. In this case, the sliding mode is where

\dot{x} = -x

(i.e., where \dot{x}+x=0). That is, when the system is constrained this way, it behaves like a simple stable linear system, and so it has a globally exponentially stable equilibrium at the (x,\dot{x})=(0,0) origin.

Control design examples

  • Consider a plant described by Equation (1) with single input u (i.e., m = 1). The switching function is picked to be the linear combination
 \sigma(\mathbf{x}) \triangleq s_1 x_1 + s_2 x_2 + \cdots + s_{n-1} x_{n-1} + s_n x_n
where the weights s_i > 0 for all 1 \leq i \leq n. The sliding surface is the hyperplane where \sigma(\mathbf{x})=0. When trajectories are forced to slide along this surface,
\dot{\sigma}(\mathbf{x}) = 0
and so
s_1 \dot{x}_1 + s_2 \dot{x}_2 + \cdots + s_{n-1} \dot{x}_{n-1} + s_n \dot{x}_n = 0
which is a reduced-order system (i.e., the new system is of order n − 1 because the system is constrained to this (n − 1)-dimensional sliding mode hyperplane). This surface may have favorable properties (e.g., when the plant dynamics are forced to slide along this surface, they move toward the origin \mathbf{x}=\mathbf{0}). Taking the derivative of the Lyapunov function in Equation (3), we have
 \dot{V}(\sigma(\mathbf{x})) = \overbrace{\sigma(\mathbf{x})^{\text{T}}}^{\tfrac{\partial V}{\partial \sigma}} \overbrace{\dot{\sigma}(\mathbf{x})}^{\tfrac{\operatorname{d} \sigma}{\operatorname{d} t}}
To ensure \dot{V} is a negative-definite function (i.e., \dot{V} < 0 for Lyapunov stability of the surface \mathbf{\sigma}=0), the feedback control law u(\mathbf{x}) must be chosen so that
\begin{cases} \dot{\sigma} < 0 &\text{if } \sigma > 0\\ \dot{\sigma} > 0 &\text{if } \sigma < 0 \end{cases}
Hence, the product \sigma \dot{\sigma} < 0 because it is the product of a negative and a positive number. Note that
\dot{\sigma}(\mathbf{x}) = \overbrace{\frac{\partial{\sigma(\mathbf{x})}}{\partial{\mathbf{x}}} \dot{\mathbf{x}}}^{\dot{\sigma}(\mathbf{x})} = \frac{\partial{\sigma(\mathbf{x})}}{\partial{\mathbf{x}}} \overbrace{\left( f(\mathbf{x},t) + B(\mathbf{x},t) u \right)}^{\dot{\mathbf{x}}} = \overbrace{[s_1, s_2, \ldots, s_n]}^{\frac{\partial{\sigma(\mathbf{x})}}{\partial{\mathbf{x}}}} \underbrace{\overbrace{\left( f(\mathbf{x},t) + B(\mathbf{x},t) u \right)}^{\dot{\mathbf{x}}}}_{\text{( i.e., an } n \times 1 \text{ vector )}} \qquad \text{(5)}
The control law u(\mathbf{x}) is chosen so that
u(\mathbf{x}) = \begin{cases} u^+(\mathbf{x}) &\text{if } \sigma(\mathbf{x}) > 0 \\ u^-(\mathbf{x}) &\text{if } \sigma(\mathbf{x}) < 0 \end{cases}
  • u^+(\mathbf{x}) is some control (e.g., possibly extreme, like "on" or "forward") that ensures Equation (5) (i.e., \dot{\sigma}) is negative at \mathbf{x}
  • u^-(\mathbf{x}) is some control (e.g., possibly extreme, like "off" or "reverse") that ensures Equation (5) (i.e., \dot{\sigma}) is positive at \mathbf{x}
The resulting trajectory should move toward the sliding surface where \sigma(\mathbf{x})=0. Because real systems have delay, sliding mode trajectories often chatter back and forth along this sliding surface (i.e., the true trajectory may not smoothly follow \sigma(\mathbf{x})=0, but it will always return to the sliding mode after leaving it).
  • Consider the second-order system

 \ddot{x} = a(t,x,\dot{x}) + u

which can be expressed in a 2-dimensional state space (with x_1 = x and x_2 = \dot{x}) as
 \begin{cases} \dot{x}_1 = x_2\\ \dot{x}_2 = a(t,x_1,x_2) + u \end{cases}
Also assume that \sup\{ |a(\cdot)| \} \leq k (i.e., | a | has a finite upper bound k that is known). For this system, choose the switching function
\sigma(x_1,x_2)= x_1 + x_2 = x + \dot{x}
By the previous example, we must choose the feedback control law u(x,\dot{x}) so that \sigma \dot{\sigma} < 0. Here,
\dot{\sigma} = \dot{x}_1 + \dot{x}_2 = \dot{x} + \ddot{x} = \dot{x}\,+\,\overbrace{a(t,x,\dot{x})+ u}^{\ddot{x}}
  • When x + \dot{x} < 0 (i.e., when σ < 0), to make \dot{\sigma} > 0, the control law should be picked so that u > |\dot{x} + a(t,x,\dot{x})|
  • When x + \dot{x} > 0 (i.e., when σ > 0), to make \dot{\sigma} < 0, the control law should be picked so that u < -|\dot{x} + a(t,x,\dot{x})|
However, by the triangle inequality,
|\dot{x}| + |a(t,x,\dot{x})| \geq |\dot{x} + a(t,x,\dot{x})|
and by the assumption about | a | ,
|\dot{x}| + k + 1 > |\dot{x}| + |a(t,x,\dot{x})|
So the system can be feedback stabilized (to return to the sliding mode) by means of the control law
u(x,\dot{x}) = \begin{cases} |\dot{x}| + k + 1 &\text{if } \underbrace{x + \dot{x}}_{\sigma} < 0,\\ -\left(|\dot{x}| + k + 1\right) &\text{if } \overbrace{x + \dot{x}}^{\sigma} > 0 \end{cases}
which can be expressed in closed form as
u(x,\dot{x}) = -(|\dot{x}|+k+1) \underbrace{\operatorname{sign}(\overbrace{\dot{x}+x}^{\sigma})}_{\text{(i.e., tests } \sigma > 0 \text{)}}
Assuming that the system trajectories are forced to move so that \sigma(\mathbf{x})=0, then
\dot{x} = -x \qquad \text{(i.e., } \sigma(x,\dot{x}) = x + \dot{x} = 0 \text{)}
So once the system reaches the sliding mode, the system's 2-dimensional dynamics behave like this 1-dimensional system, which has a globally exponentially stable equilibrium at (x,\dot{x})=(0,0).
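This design can be checked with a quick Euler simulation. The sketch below is illustrative only: the disturbance a(t,x,\dot{x}) = \sin t (so k = 1) and the step size are choices made for this example, not values from the article.

```python
import math

def sign(s):
    return (s > 0) - (s < 0)

# Plant: x'' = a(t, x, x') + u with a = sin(t), so |a| <= k = 1.
# Control: u = -(|x'| + k + 1) * sign(x + x'), as derived above.
k, dt = 1.0, 1e-3
x, v, t = 2.0, 0.0, 0.0
reach_time = None
for _ in range(int(5.0 / dt)):
    s = x + v                                # switching function sigma
    if reach_time is None and abs(s) < 1e-2:
        reach_time = t                       # sliding surface reached
    u = -(abs(v) + k + 1.0) * sign(s)        # discontinuous control law
    x += dt * v
    v += dt * (math.sin(t) + u)
    t += dt
# During reaching, sigma' <= -1, so the surface is reached by t <= sigma(0) = 2;
# afterward x decays along the sliding dynamics x' = -x (with small chatter).
```

After the reaching phase, σ = x + \dot{x} chatters in a small neighborhood of zero, and x decays roughly like e^{-t}, exactly the reduced-order behavior described above.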

Sliding mode observer

Sliding mode control can be used in the design of state observers. These non-linear high-gain observers have the ability to bring coordinates of the estimator error dynamics to zero in finite time. Additionally, switched-mode observers have attractive measurement noise resilience similar to that of a Kalman filter.[6][7] For simplicity, the example here uses a traditional sliding mode modification of a Luenberger observer for an LTI system. In these sliding mode observers, the order of the observer dynamics is reduced by one when the system enters the sliding mode. In this particular example, the estimator error for a single estimated state is brought to zero in finite time, and after that time the other estimator errors decay exponentially to zero. However, as first described by Drakunov,[8] a sliding mode observer for non-linear systems can be built that brings the estimation error for all estimated states to zero in a finite (and arbitrarily small) time.

Here, consider the LTI system

\begin{align} \dot{\mathbf{x}} &= A \mathbf{x} + B \mathbf{u}\\ y &= \begin{bmatrix}1 & 0 & 0 & \cdots & 0\end{bmatrix} \mathbf{x} = x_1 \end{align}

where state vector \mathbf{x} \triangleq (x_1, x_2, \dots, x_n) \in \mathbb{R}^n, \mathbf{u} \triangleq (u_1, u_2, \dots, u_r) \in \mathbb{R}^r is a vector of inputs, and output y is a scalar equal to the first state of the \mathbf{x} state vector. Let

A \triangleq \begin{bmatrix} a_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}

where


  • a11 is a scalar representing the influence of the first state x1 on itself,
  • A_{21} \in \mathbb{R}^{(n-1) \times 1} is a column vector representing the influence of the first state on the other states,
  • A_{22} \in \mathbb{R}^{(n-1) \times (n-1)} is a matrix representing the influence of the other states on themselves, and
  • A_{12} \in \mathbb{R}^{1\times(n-1)} is a row vector representing the influence of the other states on the first state.

The goal is to design a high-gain state observer that estimates the state vector \mathbf{x} using only information from the measurement y = x1. Hence, let the vector \hat{\mathbf{x}} = (\hat{x}_1,\hat{x}_2,\dots,\hat{x}_n) \in \mathbb{R}^n be the estimates of the n states. The observer takes the form

\dot{\hat{\mathbf{x}}} = A \hat{\mathbf{x}} + B \mathbf{u} + L v(\hat{x}_1 - x_1)

where v: \mathbb{R} \mapsto \mathbb{R} is a nonlinear function of the error between estimated state \hat{x}_1 and the output y = x_1, and L \in \mathbb{R}^n is an observer gain vector that serves a similar purpose as in the typical linear Luenberger observer. Likewise, let

L = \begin{bmatrix} -1 \\ L_{2} \end{bmatrix}

where L_2 \in \mathbb{R}^{(n-1)} is a column vector. Additionally, let \mathbf{e} = (e_1, e_2, \dots, e_n) \in \mathbb{R}^n be the state estimator error. That is, \mathbf{e} = \hat{\mathbf{x}} - \mathbf{x}. The error dynamics are then

\begin{align} \dot{\mathbf{e}} &= \dot{\hat{\mathbf{x}}} - \dot{\mathbf{x}}\\ &= A \hat{\mathbf{x}} + B \mathbf{u} + L v(\hat{x}_1 - x_1) - A \mathbf{x} - B \mathbf{u}\\ &= A (\hat{\mathbf{x}} - \mathbf{x}) + L v(\hat{x}_1 - x_1)\\ &= A \mathbf{e} + L v(e_1) \end{align}

where e_1 = \hat{x}_1 - x_1 is the estimator error for the first state estimate. The nonlinear control law v can be designed to enforce the sliding manifold

0 = \hat{x}_1 - x_1

so that estimate \hat{x}_1 tracks the real state x1 after some finite time (i.e., \hat{x}_1 = x_1). Hence, the sliding mode control switching function

\sigma(\hat{x}_1,x_1) \triangleq e_1 = \hat{x}_1 - x_1.

To attain the sliding manifold, \dot{\sigma} and σ must always have opposite signs (i.e., \sigma \dot{\sigma} < 0 for essentially all \mathbf{x}). However,

 \dot{\sigma} = \dot{e}_1 = a_{11} e_1 + A_{12} \mathbf{e}_2 - v( e_1 ) = a_{11} e_1 + A_{12} \mathbf{e}_2 - v( \sigma )

where \mathbf{e}_2 \triangleq (e_2, e_3, \ldots, e_n) \in \mathbb{R}^{(n-1)} is the collection of the estimator errors for all of the unmeasured states. To ensure that \sigma \dot{\sigma} < 0, let

v( \sigma ) = M \operatorname{sign}(\sigma)


where

M > \max\{ |a_{11} e_1 + A_{12} \mathbf{e}_2| \}.

That is, the positive constant M must be greater than the maximum magnitude of a_{11} e_1 + A_{12} \mathbf{e}_2 along system trajectories (the initial errors are assumed to be bounded so that M can be picked large enough). If M is sufficiently large, it can be assumed that the system achieves e_1 = 0 (i.e., \hat{x}_1 = x_1). Because e_1 is constant (i.e., 0) along this manifold, \dot{e}_1 = 0 as well. Hence, the discontinuous control v(σ) may be replaced with the equivalent continuous control v_{\text{eq}}, where

 0 = \dot{\sigma} = a_{11} \mathord{\overbrace{e_1}^{ {} = 0 }} + A_{12} \mathbf{e}_2 - \mathord{\overbrace{v_{\text{eq}}}^{v(\sigma)}} = A_{12} \mathbf{e}_2 - v_{\text{eq}}.


Hence,

 \mathord{\overbrace{v_{\text{eq}}}^{\text{scalar}}} = \mathord{\overbrace{A_{12}}^{1 \times (n-1) \text{ vector}}} \mathord{\overbrace{\mathbf{e}_2}^{(n-1) \times 1 \text{ vector}}}.

This equivalent control veq represents the contribution from the other (n − 1) states to the trajectory of the output state x1. In particular, the row A12 acts like an output vector for the error subsystem

 \mathord{\overbrace{ \begin{bmatrix} \dot{e}_2 \\ \dot{e}_3 \\ \vdots \\ \dot{e}_n \end{bmatrix} }^{\dot{\mathbf{e}}_2}} = A_{22} \mathord{\overbrace{ \begin{bmatrix} e_2 \\ e_3 \\ \vdots \\ e_n \end{bmatrix} }^{\mathbf{e}_2}} + L_2 v(e_1) = A_{22} \mathbf{e}_2 + L_2 v_{\text{eq}} = A_{22} \mathbf{e}_2 + L_2 A_{12} \mathbf{e}_2 = ( A_{22} + L_2 A_{12} ) \mathbf{e}_2.

So, to ensure the estimator error \mathbf{e}_2 for the unmeasured states converges to zero, the (n-1)\times 1 vector L_2 must be chosen so that the (n-1)\times (n-1) matrix (A_{22} + L_2 A_{12}) is Hurwitz (i.e., the real part of each of its eigenvalues must be negative). Hence, provided that the pair (A_{22}, A_{12}) is observable, this \mathbf{e}_2 system can be stabilized in exactly the same way as a typical linear state observer when A_{12} is viewed as the output matrix (i.e., "C"). That is, the v_{\text{eq}} equivalent control provides measurement information about the unmeasured states that can continually move their estimates asymptotically closer to them. Meanwhile, the discontinuous control v = M \operatorname{sign}( \hat{x}_1 - x_1 ) forces the estimate of the measured state to have zero error in finite time. Additionally, white zero-mean symmetric measurement noise (e.g., Gaussian noise) only affects the switching frequency of the control v, and hence the noise will have little effect on the equivalent sliding mode control v_{\text{eq}}. Hence, the sliding mode observer has Kalman filter–like features.[7]

The final version of the observer is thus

\begin{align} \dot{\hat{\mathbf{x}}} &= A \hat{\mathbf{x}} + B \mathbf{u} + L M \operatorname{sign}(\hat{x}_1 - x_1)\\ &= A \hat{\mathbf{x}} + B \mathbf{u} + \begin{bmatrix} -1\\L_2 \end{bmatrix} M \operatorname{sign}(\hat{x}_1 - x_1)\\ &= A \hat{\mathbf{x}} + B \mathbf{u} + \begin{bmatrix} -M\\L_2 M\end{bmatrix} \operatorname{sign}(\hat{x}_1 - x_1)\\ &= A \hat{\mathbf{x}} + \begin{bmatrix} B & \begin{bmatrix} -M\\L_2 M\end{bmatrix} \end{bmatrix} \begin{bmatrix} \mathbf{u} \\ \operatorname{sign}(\hat{x}_1 - x_1) \end{bmatrix}\\ &= A_{\text{obs}} \hat{\mathbf{x}} + B_{\text{obs}} \mathbf{u}_{\text{obs}} \end{align}

where


  • A_{\text{obs}} \triangleq A,
  • B_{\text{obs}} \triangleq \begin{bmatrix} B & \begin{bmatrix} -M\\L_2 M\end{bmatrix} \end{bmatrix}, and
  • u_{\text{obs}} \triangleq \begin{bmatrix} \mathbf{u} \\ \operatorname{sign}(\hat{x}_1 - x_1) \end{bmatrix}.

That is, by augmenting the control vector \mathbf{u} with the switching function \operatorname{sign}(\hat{x}_1-x_1), the sliding mode observer can be implemented as an LTI system in which the discontinuous signal \operatorname{sign}(\hat{x}_1-x_1) is viewed as a control input to the 2-input LTI system.

For simplicity, this example assumes that the sliding mode observer has access to a measurement of a single state (i.e., output y = x1). However, a similar procedure can be used to design a sliding mode observer for a vector of weighted combinations of states (i.e., when output \mathbf{y} = C \mathbf{x} uses a generic matrix C). In each case, the sliding mode will be the manifold where the estimated output \hat{\mathbf{y}} follows the measured output \mathbf{y} with zero error (i.e., the manifold where \sigma(\mathbf{x}) \triangleq \hat{\mathbf{y}} - \mathbf{y} = \mathbf{0}).
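As a numerical sanity check, here is an Euler simulation of this observer for a hypothetical two-state plant; the matrix A, the gain M, L_2, and the step size are all illustrative choices of this sketch. With A_{12} = 1 and L_2 = 0, the matrix A_{22} + L_2 A_{12} = -3 is Hurwitz, so e_1 is forced to zero in finite time and e_2 then decays exponentially.

```python
def sign(s):
    return (s > 0) - (s < 0)

# Hypothetical plant: x' = A x, y = x1, with A = [[0, 1], [-2, -3]],
# so a11 = 0, A12 = 1, A21 = -2, A22 = -3 (no input, so B u drops out).
# Observer: xhat' = A xhat + L v, with v = M sign(xhat1 - x1), L = [-1, L2].
M, L2, dt = 10.0, 0.0, 1e-4
x1, x2 = 1.0, 0.0      # true state
xh1, xh2 = 0.0, 0.0    # observer state
for _ in range(int(3.0 / dt)):
    v = M * sign(xh1 - x1)                  # discontinuous injection
    dx1, dx2 = x2, -2.0 * x1 - 3.0 * x2     # true plant
    dxh1 = xh2 - v                          # observer row 1 (L entry -1)
    dxh2 = -2.0 * xh1 - 3.0 * xh2 + L2 * v  # observer row 2 (L entry L2)
    x1 += dt * dx1; x2 += dt * dx2
    xh1 += dt * dxh1; xh2 += dt * dxh2
# e1 = xh1 - x1 hits a small chattering neighborhood of zero in finite
# time; e2 = xh2 - x2 then decays through the (A22 + L2*A12) dynamics.
```

The chatter amplitude in e_1 scales with the step size times M, which mirrors the hardware situation: faster switching (smaller effective delay) shrinks the chattering band.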

Notes


  1. ^ Other pulse-type modulation techniques include delta-sigma modulation.


References

  1. ^ a b Zinober, A.S.I., ed. (1990). Deterministic control of uncertain systems. London: Peter Peregrinus Press. ISBN 978-0863411700.
  2. ^ Utkin, Vadim I. (1993), "Sliding Mode Control Design Principles and Applications to Electric Drives", IEEE Transactions on Industrial Electronics (IEEE) 40 (1): 23–36  
  3. ^ a b c d Khalil, H.K. (2002). Nonlinear Systems (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-067389-7.
  4. ^ Filippov, A.F. (1988). Differential Equations with Discontinuous Right-hand Sides. Kluwer. ISBN 978-9027726995.  
  5. ^ Perruquetti, W.; Barbot, J.P. (2002), Sliding Mode Control in Engineering, Marcel Dekker Hardcover  
  6. ^ Utkin, Vadim; Guldner, Jürgen; Shi, Jingxin (1999), Sliding Mode Control in Electromechanical Systems, Philadelphia, PA: Taylor & Francis, Inc., ISBN 0-7484-0116-4  
  7. ^ a b Drakunov, S.V. (1983), "An adaptive quasioptimal filter with discontinuous parameters", Automation and Remote Control 44 (9): 1167–1175  
  8. ^ Drakunov, S.V. (1992), "Sliding-Mode Observers Based on Equivalent Control Method", Proceedings of the 31st IEEE Conference on Decision and Control (CDC), (Tucson, Arizona, December 16–18): 2368–2370, ISBN 0-7803-0872-7,  

Further reading

  • Acary, V.; Brogliato, B. (2008). Numerical Methods for Nonsmooth Dynamical Systems. Applications in Mechanics and Electronics. Heidelberg: Springer-Verlag, LNACM 35. ISBN 978-3-540-75391-9.  
  • Edwards, Christopher; Fossas Colet, Enric; Fridman, Leonid, eds. (2006), Advances in Variable Structure and Sliding Mode Control, Lecture Notes in Control and Information Sciences, vol. 334, Berlin: Springer-Verlag, ISBN 978-3-540-32800-1
  • Edwards, C.; Spurgeon, S. (1998). Sliding Mode Control: Theory and Applications. London: Taylor and Francis. ISBN 0-7484-0601-8.  
  • Utkin, V.I. (1992). Sliding Modes in Control and Optimization. Springer-Verlag. ISBN 978-0387535166.  
  • Zinober, Alan S.I., ed. (1994). Variable Structure and Lyapunov Control. London: Springer-Verlag. doi:10.1007/BFb0033675. ISBN 978-3-540-19869-7.

