Lyapunov stability

Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems. The most important type concerns the stability of solutions near a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov. In simple terms, if the solutions that start out near an equilibrium point $x_e$ stay near $x_e$ forever, then $x_e$ is Lyapunov stable. More strongly, if $x_e$ is Lyapunov stable and all solutions that start out near $x_e$ converge to $x_e$, then $x_e$ is said to be asymptotically stable (see asymptotic analysis). The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.

History

Lyapunov stability is named after Aleksandr Mikhailovich Lyapunov, a Russian mathematician who defended the thesis The General Problem of Stability of Motion at Kharkov University (now V. N. Karazin Kharkiv National University) in 1892.[1] A. M. Lyapunov was a pioneer in successful endeavors to develop a global approach to the analysis of the stability of nonlinear dynamical systems, in contrast to the widely used local method of linearizing them about points of equilibrium. His work, initially published in Russian and then translated to French, received little attention for many years. The mathematical theory of stability of motion, founded by A. M. Lyapunov, considerably anticipated its applications in science and technology. Moreover, Lyapunov did not himself make applications in this field, his own interest being in the stability of rotating fluid masses with astronomical application. He had no doctoral students who continued research in the field of stability, and his own fate was tragic: he died by suicide in 1918.[2] For several decades the theory of stability sank into complete oblivion. The Russian-Soviet mathematician and mechanician Nikolay Gur'yevich Chetaev, working at the Kazan Aviation Institute in the 1930s, was the first to realize the magnitude of the discovery made by A. M. Lyapunov. The contribution to the theory made by N. G. Chetaev[3] was so significant that many mathematicians, physicists and engineers consider him Lyapunov's direct successor in the creation and development of the mathematical theory of stability.

Interest in the subject suddenly skyrocketed during the Cold War period, when the so-called "Second Method of Lyapunov" (see below) was found to be applicable to the stability of aerospace guidance systems, which typically contain strong nonlinearities not treatable by other methods. A large number of publications have appeared since in the control and systems literature.[4][5][6][7][8] More recently, the concept of the Lyapunov exponent (related to Lyapunov's First Method of discussing stability) has received wide interest in connection with chaos theory. Lyapunov stability methods have also been applied to finding equilibrium solutions in traffic assignment problems.[9]

Definition for continuous-time systems

Consider an autonomous nonlinear dynamical system

$$\dot{x} = f(x(t)), \qquad x(0) = x_0,$$

where $x(t) \in \mathcal{D} \subseteq \mathbb{R}^n$ denotes the system state vector, $\mathcal{D}$ an open set containing the origin, and $f : \mathcal{D} \to \mathbb{R}^n$ is a continuous vector field on $\mathcal{D}$. Suppose $f$ has an equilibrium at $x_e$, so that $f(x_e) = 0$. Then:

  1. This equilibrium is said to be Lyapunov stable if for every $\epsilon > 0$ there exists a $\delta > 0$ such that if $\|x(0) - x_e\| < \delta$ then for every $t \geq 0$ we have $\|x(t) - x_e\| < \epsilon$.
  2. The equilibrium of the above system is said to be asymptotically stable if it is Lyapunov stable and there exists $\delta > 0$ such that if $\|x(0) - x_e\| < \delta$ then $\lim_{t \to \infty} \|x(t) - x_e\| = 0$.
  3. The equilibrium of the above system is said to be exponentially stable if it is asymptotically stable and there exist $\alpha > 0$, $\beta > 0$, $\delta > 0$ such that if $\|x(0) - x_e\| < \delta$ then $\|x(t) - x_e\| \leq \alpha \|x(0) - x_e\| e^{-\beta t}$ for all $t \geq 0$.

Conceptually, the meanings of the above terms are the following:

  1. Lyapunov stability of an equilibrium means that solutions starting "close enough" to the equilibrium (within a distance $\delta$ from it) remain "close enough" forever (within a distance $\epsilon$ from it). Note that this must be true for any $\epsilon$ that one may want to choose.
  2. Asymptotic stability means that solutions that start close enough not only remain close enough but also eventually converge to the equilibrium.
  3. Exponential stability means that solutions not only converge, but in fact converge faster than or at least as fast as a particular known rate $\alpha \|x(0) - x_e\| e^{-\beta t}$ (illustrated numerically below).
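
The following Python sketch illustrates the three notions numerically for a damped linear oscillator; the system, the initial condition, and the decay constants $\alpha$ and $\beta$ are illustrative choices, not part of the definitions above.

```python
# Minimal numerical sketch of the three stability notions for the damped
# oscillator x1' = x2, x2' = -x1 - 0.5*x2, whose equilibrium is the origin.
# The constants alpha and beta below are illustrative, not derived values.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    return [x[1], -x[0] - 0.5 * x[1]]

x0 = np.array([0.1, 0.0])                      # start "close enough" to the origin
sol = solve_ivp(f, (0.0, 60.0), x0, dense_output=True, rtol=1e-9, atol=1e-12)
t = np.linspace(0.0, 60.0, 2000)
norms = np.linalg.norm(sol.sol(t), axis=0)

print("stays near origin (Lyapunov):", norms.max() <= 2 * np.linalg.norm(x0))
print("converges (asymptotic):      ", norms[-1] < 1e-6)
# exponential envelope ||x(t)|| <= alpha * ||x0|| * exp(-beta*t), sample constants
alpha, beta = 3.0, 0.2
print("exponential bound holds:     ",
      np.all(norms <= alpha * np.linalg.norm(x0) * np.exp(-beta * t)))
```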

The trajectory $\phi(t)$ is (locally) attractive if

$$\|x(t) - \phi(t)\| \to 0 \quad \text{as } t \to \infty$$

for all trajectories $x(t)$ that start close enough to $\phi(t)$, and globally attractive if this property holds for all trajectories.

That is, a trajectory $x$ is asymptotically stable if it belongs to the interior of its stable manifold, i.e., if it is both attractive and stable. (There are examples showing that attractivity does not imply asymptotic stability.[10][11][12] Such examples are easy to create using homoclinic connections.)

If the Jacobian of the dynamical system at an equilibrium happens to be a stability matrix (i.e., if the real part of each eigenvalue is strictly negative), then the equilibrium is asymptotically stable.
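As a brief illustration of this linearization criterion, the following Python sketch (with an illustrative damped-pendulum vector field and parameter values chosen here) checks that every eigenvalue of the Jacobian at the origin has strictly negative real part.

```python
# Linearization test for asymptotic stability of the damped pendulum
# x1' = x2, x2' = -(g/l)*sin(x1) - c*x2, with equilibrium at the origin.
import numpy as np

g_over_l, damping = 9.81, 0.3          # illustrative constants
J = np.array([[0.0, 1.0],
              [-g_over_l, -damping]])  # Jacobian of the vector field at (0, 0)

eigvals = np.linalg.eigvals(J)
print("eigenvalues:", eigvals)
print("asymptotically stable:", np.all(eigvals.real < 0))
```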

System of deviations

Instead of considering stability only near an equilibrium point (a constant solution $x(t) = x_e$), one can formulate similar definitions of stability near an arbitrary solution $x(t) = \phi(t)$. However, one can reduce the more general case to that of an equilibrium by a change of variables called a "system of deviations". Define $y = x - \phi(t)$, which obeys the differential equation

$$\dot{y} = f(t, y + \phi(t)) - \dot{\phi}(t) = g(t, y).$$

This is no longer an autonomous system, but it has a guaranteed equilibrium point at $y = 0$ whose stability is equivalent to the stability of the original solution $x(t) = \phi(t)$.
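The construction can be sketched numerically as follows; the vector field, reference trajectory, and time span below are illustrative choices.

```python
# Sketch: build the "system of deviations" around a numerically computed
# reference solution phi(t) of x' = f(x), and check that y = 0 is an
# equilibrium of the deviation system.
import numpy as np
from scipy.integrate import solve_ivp

def f(x):
    return np.array([x[1], -x[0] - 0.1 * x[1] ** 3])   # example vector field

ref = solve_ivp(lambda t, x: f(x), (0.0, 10.0), [1.0, 0.0], dense_output=True)
phi = ref.sol                                           # reference solution phi(t)

def g(t, y):
    # deviation dynamics y' = f(y + phi(t)) - phi'(t); since phi solves the ODE,
    # its derivative is f(phi(t)), which we use here in place of differentiation
    return f(y + phi(t)) - f(phi(t))

print(g(3.0, np.zeros(2)))   # ~[0, 0]: y = 0 is an equilibrium of the deviation system
```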

Lyapunov's second method for stability

Lyapunov, in his original 1892 work, proposed two methods for demonstrating stability.[1] The first method developed the solution in a series which was then proved convergent within limits. The second method, which is now referred to as the Lyapunov stability criterion or the Direct Method, makes use of a Lyapunov function V(x), which has an analogy to the potential function of classical dynamics. It is introduced as follows for a system $\dot{x} = f(x)$ having a point of equilibrium at $x = 0$. Consider a function $V : \mathbb{R}^n \to \mathbb{R}$ such that

  • $V(x) = 0$ if and only if $x = 0$
  • $V(x) > 0$ if and only if $x \neq 0$
  • $\dot{V}(x) = \frac{d}{dt} V(x) = \sum_{i=1}^{n} \frac{\partial V}{\partial x_i} f_i(x) = \nabla V \cdot f(x) \leq 0$ for all values of $x \neq 0$. Note: for asymptotic stability, $\dot{V}(x) < 0$ for $x \neq 0$ is required.

Then V(x) is called a Lyapunov function and the system is stable in the sense of Lyapunov. (Note that $V(0) = 0$ is required; otherwise, for example, $V(x) = 1/(1+|x|)$ would "prove" that $\dot{x}(t) = x$ is locally stable.) An additional condition called "properness" or "radial unboundedness" is required in order to conclude global stability. Global asymptotic stability (GAS) follows similarly.
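The three conditions can be checked symbolically for a concrete case; the system and candidate function below are illustrative choices.

```python
# Symbolic check of the direct-method conditions for V(x) = x1^2 + x2^2 and
# the system x1' = -x1 - x2, x2' = x1 - x2 (equilibrium at the origin).
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Matrix([-x1 - x2, x1 - x2])           # vector field
V = x1**2 + x2**2                            # candidate Lyapunov function

grad_V = sp.Matrix([V]).jacobian([x1, x2])   # row vector [dV/dx1, dV/dx2]
Vdot = sp.expand((grad_V * f)[0, 0])         # grad(V) . f along trajectories

print("V(0) = 0:", V.subs({x1: 0, x2: 0}) == 0)
print("Vdot    =", Vdot)                     # -2*x1**2 - 2*x2**2, negative definite
print("Vdot < 0 away from the origin:", sp.simplify(Vdot + 2*x1**2 + 2*x2**2) == 0)
```

Since $\dot{V}$ here is strictly negative away from the origin, this sketch certifies asymptotic (not merely Lyapunov) stability for that example.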

It is easier to visualize this method of analysis by thinking of a physical system (e.g. vibrating spring and mass) and considering the energy of such a system. If the system loses energy over time and the energy is never restored then eventually the system must grind to a stop and reach some final resting state. This final state is called the attractor. However, finding a function that gives the precise energy of a physical system can be difficult, and for abstract mathematical systems, economic systems or biological systems, the concept of energy may not be applicable.

Lyapunov's realization was that stability can be proven without requiring knowledge of the true physical energy, provided a Lyapunov function can be found to satisfy the above constraints.

Definition for discrete-time systems

The definition for discrete-time systems is almost identical to that for continuous-time systems. The definition below provides this, using an alternate language commonly used in more mathematical texts.

Let $(X, d)$ be a metric space and $f : X \to X$ a continuous function. A point $x \in X$ is said to be Lyapunov stable if

$$\forall \epsilon > 0 \ \exists \delta > 0 \ \forall y \in X \ \left[ d(x, y) < \delta \Rightarrow \forall n \in \mathbb{N} \ d\left(f^n(x), f^n(y)\right) < \epsilon \right].$$

We say that $x$ is asymptotically stable if it belongs to the interior of its stable set, i.e., if

$$\exists \delta > 0 \ \forall y \in X \ \left[ d(x, y) < \delta \Rightarrow \lim_{n \to \infty} d\left(f^n(x), f^n(y)\right) = 0 \right].$$
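
A minimal sketch of this definition, using the illustrative contraction $f(x) = x/2$ on the metric space $(\mathbb{R}, |\cdot|)$, for which the fixed point $x = 0$ is both Lyapunov stable and asymptotically stable:

```python
# For f(x) = x/2 the distance d(f^n(x), f^n(y)) = |x - y| / 2^n shrinks, so
# delta = epsilon works in the definition and nearby orbits converge together.
f = lambda x: 0.5 * x

def orbit_distance(x, y, n):
    for _ in range(n):
        x, y = f(x), f(y)
    return abs(x - y)

print([round(orbit_distance(0.0, 0.3, n), 6) for n in range(6)])
# distances decrease monotonically toward zero
```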

Stability for linear state space models

A linear state space model

$$\dot{\mathbf{x}} = A\mathbf{x},$$

where $A$ is a constant, finite-dimensional matrix, is asymptotically stable (in fact, exponentially stable) if all real parts of the eigenvalues of $A$ are negative. This condition is equivalent to the following one:[13]

$$A^{\mathsf{T}} M + M A$$

is negative definite for some positive definite matrix $M = M^{\mathsf{T}}$. (The relevant Lyapunov function is $V(x) = x^{\mathsf{T}} M x$.)
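Assuming SciPy is available, the following sketch solves the Lyapunov equation $A^{\mathsf{T}} M + M A = -Q$ for an illustrative Hurwitz matrix $A$ and $Q = I$, and checks that the resulting $M$ is positive definite.

```python
# Solve A^T M + M A = -Q for M and verify it defines a quadratic Lyapunov
# function V(x) = x^T M x for the stable linear system x' = A x.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                 # eigenvalues -1 and -2 (Hurwitz)
Q = np.eye(2)
M = solve_continuous_lyapunov(A.T, -Q)       # solves A^T M + M A = -Q

print("M =\n", M)
print("M positive definite:", np.all(np.linalg.eigvalsh(M) > 0))
print("residual check:      ", np.allclose(A.T @ M + M @ A, -Q))
```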

Correspondingly, a time-discrete linear state space model

$$\mathbf{x}_{t+1} = A \mathbf{x}_t$$

is asymptotically stable (in fact, exponentially stable) if all the eigenvalues of $A$ have a modulus smaller than one.

This latter condition has been generalized to switched systems: a linear switched discrete-time system (governed by a set of matrices $\{A_1, \dots, A_m\}$)

$$\mathbf{x}_{t+1} = A_{i_t} \mathbf{x}_t, \quad A_{i_t} \in \{A_1, \dots, A_m\}$$

is asymptotically stable (in fact, exponentially stable) if the joint spectral radius of the set $\{A_1, \dots, A_m\}$ is smaller than one.
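A crude numerical sketch of this criterion: for any length $k$, the quantity $\max \|A_{i_1} \cdots A_{i_k}\|^{1/k}$ over all products of length $k$ is an upper bound on the joint spectral radius, so if it falls below one the switched system is exponentially stable under arbitrary switching. The matrices below are illustrative.

```python
# Upper bounds on the joint spectral radius via norms of matrix products.
import itertools
import numpy as np

A1 = np.array([[0.4, 0.3], [0.0, 0.5]])
A2 = np.array([[0.5, 0.0], [0.2, 0.4]])
mats = [A1, A2]

def jsr_upper_bound(mats, k):
    best = 0.0
    for combo in itertools.product(mats, repeat=k):
        P = np.linalg.multi_dot(combo) if k > 1 else combo[0]
        best = max(best, np.linalg.norm(P, 2) ** (1.0 / k))
    return best

for k in (1, 2, 4, 8):
    print(k, jsr_upper_bound(mats, k))   # successive upper bounds, all below 1 here
```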

Stability for systems with inputs

A system with inputs (or controls) has the form

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u})$$

where the (generally time-dependent) input u(t) may be viewed as a control, external input, stimulus, disturbance, or forcing function. It has been shown[14] that near a point of equilibrium which is Lyapunov stable, the system remains stable under small disturbances. For larger input disturbances, the study of such systems is the subject of control theory and is applied in control engineering. For systems with inputs, one must quantify the effect of inputs on the stability of the system. The two main approaches to this analysis are BIBO stability (for linear systems) and input-to-state stability (ISS) (for nonlinear systems).

Example

This example shows a system where a Lyapunov function can be used to prove Lyapunov stability but cannot show asymptotic stability. Consider the following equation, based on the Van der Pol oscillator equation with the friction term changed:

$$\ddot{y} + y - \varepsilon \left( \frac{\dot{y}^3}{3} - \dot{y} \right) = 0.$$

Let

$$x_1 = y, \qquad x_2 = \dot{y}$$

so that the corresponding system is

$$\begin{aligned} \dot{x}_1 &= x_2, \\ \dot{x}_2 &= -x_1 + \varepsilon \left( \frac{x_2^3}{3} - x_2 \right). \end{aligned}$$

The origin $x_1 = 0,\ x_2 = 0$ is the only equilibrium point. Let us choose as a Lyapunov function

$$V = \frac{1}{2} \left( x_1^2 + x_2^2 \right),$$

which is clearly positive definite. Its derivative is

$$\dot{V} = x_1 \dot{x}_1 + x_2 \dot{x}_2 = x_1 x_2 - x_1 x_2 + \varepsilon \frac{x_2^4}{3} - \varepsilon x_2^2 = \varepsilon \frac{x_2^4}{3} - \varepsilon x_2^2.$$

It seems that if the parameter $\varepsilon$ is positive, stability is asymptotic for $x_2^2 < 3$. But this is wrong, since $\dot{V}$ does not depend on $x_1$ and vanishes everywhere on the $x_1$ axis, i.e., $\dot{V}$ is only negative semidefinite. Hence this choice of $V$ establishes that the equilibrium is Lyapunov stable, but it cannot by itself show asymptotic stability.
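The following sketch simulates this example for the illustrative value $\varepsilon = 0.5$; it confirms that $V$ is nonincreasing along a trajectory while $\dot{V}$ vanishes identically on the $x_1$ axis (the simulated trajectory does in fact decay, which the direct method with this $V$ alone does not certify).

```python
# Numerical check of the modified Van der Pol example with epsilon = 0.5.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.5

def rhs(t, x):
    x1, x2 = x
    return [x2, -x1 + eps * (x2**3 / 3.0 - x2)]

def V(x):
    return 0.5 * (x[0]**2 + x[1]**2)

def Vdot(x):
    return eps * (x[1]**4 / 3.0 - x[1]**2)

sol = solve_ivp(rhs, (0.0, 30.0), [0.5, 0.0], rtol=1e-9, atol=1e-12)
Vs = [V(x) for x in sol.y.T]
print("V(0) = %.6f, max V = %.6f, V(T) = %.6g" % (Vs[0], max(Vs), Vs[-1]))
print("V never increases (within solver tolerance):",
      all(b <= a + 1e-8 for a, b in zip(Vs, Vs[1:])))
print("Vdot on the x1-axis:", [Vdot([x1, 0.0]) for x1 in (-1.0, 0.3, 2.0)])  # all zero
```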

Barbalat's lemma and stability of time-varying systems

It may be difficult to find a Lyapunov function with a negative definite derivative as required by the Lyapunov stability criterion; however, a function $V$ with $\dot{V}$ that is only negative semi-definite may be available. For autonomous systems, the invariant set theorem can be applied to prove asymptotic stability, but this theorem is not applicable when the dynamics are a function of time.[15]

Instead, Barbalat's lemma allows for Lyapunov-like analysis of these non-autonomous systems. The lemma is motivated by the following observations. Assuming f is a function of time only:

  • Having $\dot{f}(t) \to 0$ does not imply that $f(t)$ has a limit as $t \to \infty$. For example, $f(t) = \sin(\ln(t)),\; t > 0$.
  • Having $f(t)$ approach a limit as $t \to \infty$ does not imply that $\dot{f}(t) \to 0$. For example, $f(t) = \sin\left(t^2\right)/t,\; t > 0$.
  • Having $f(t)$ lower bounded and decreasing ($\dot{f} \leq 0$) implies it converges to a limit. But it does not say whether or not $\dot{f} \to 0$ as $t \to \infty$.

Barbalat's lemma says: If $f(t)$ has a finite limit as $t \to \infty$ and if $\dot{f}$ is uniformly continuous (a sufficient condition for uniform continuity is that $\ddot{f}$ is bounded), then $\dot{f}(t) \to 0$ as $t \to \infty$.[16]

An alternative version is as follows: Let $p \in [1, \infty)$ and $q \in (1, \infty]$. If $f \in L^p(0, \infty)$ and $\dot{f} \in L^q(0, \infty)$, then $f(t) \to 0$ as $t \to \infty$.[17]

In the following form the lemma is also true in the vector-valued case: Let $f(t)$ be a uniformly continuous function with values in a Banach space $E$ and assume that $\textstyle\int_0^t f(\tau)\, d\tau$ has a finite limit as $t \to \infty$. Then $f(t) \to 0$ as $t \to \infty$.[18]

The following example is taken from page 125 of Slotine and Li's book Applied Nonlinear Control.[15]

Consider a non-autonomous system

$$\dot{e} = -e + g \cdot w(t)$$
$$\dot{g} = -e \cdot w(t).$$

This is non-autonomous because the input $w$ is a function of time. Assume that the input $w(t)$ is bounded.

Taking $V = e^2 + g^2$ gives $\dot{V} = -2e^2 \leq 0$.

This says that $V(t) \leq V(0)$ by the first two conditions, and hence $e$ and $g$ are bounded. But it does not say anything about the convergence of $e$ to zero, as $\dot{V}$ is only negative semi-definite (note that $g$ can be non-zero when $\dot{V} = 0$) and the dynamics are non-autonomous.

Using Barbalat's lemma:

$$\ddot{V} = -4e(-e + g \cdot w).$$

This is bounded because $e$, $g$ and $w$ are bounded. This implies $\dot{V} \to 0$ as $t \to \infty$ and hence $e \to 0$. This proves that the error converges.
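
A numerical sketch of this example, with the illustrative bounded input $w(t) = \sin t$ and unit initial conditions:

```python
# Simulate e' = -e + g*w(t), g' = -e*w(t) with bounded w(t) = sin(t):
# e(t) tends to zero (as Barbalat's lemma predicts), while g merely stays bounded.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    e, g = x
    w = np.sin(t)
    return [-e + g * w, -e * w]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 1.0], rtol=1e-9, atol=1e-12)
e, g = sol.y
print("e(T) ~ 0:       ", abs(e[-1]) < 1e-3)
print("g stays bounded:", np.max(np.abs(g)) <= np.sqrt(e[0]**2 + g[0]**2) + 1e-6)
```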

Stability of time-varying systems with vanishing and bounded perturbations

Consider the auxiliary differential equation $\dot{v}(t) = -q(t)\beta(v(t)) + e(t)$ for all $t \geq t_0$, with state $v \in \mathbb{R}$ and initial condition $v(t_0) \geq 0$. The function $\beta \in C^0(\mathbb{R}, \mathbb{R})$ is strictly increasing and satisfies $\beta(0) = 0$. The functions $e$ and $q$ belong to $C^0(\mathbb{R}, \mathbb{R}_+)$. The importance of this auxiliary differential equation is that a wide class of Lyapunov inequalities can be linked to it by a comparison principle.

Assume that for all $t \geq t_0$, $q(t) > 0$, with $\int_{t_0}^{\infty} q(t)\, dt = \infty$ and $\lim_{t \to \infty} \frac{e(t)}{q(t)} = L \in \mathbb{R}_+ \cup \{\infty\}$. This property describes a bounded perturbation when $L > 0$ and a vanishing perturbation when $L = 0$.

For each initial condition $v(t_0) \geq 0$ and each solution $v(t)$ with maximal interval of existence $[t_0, \omega)$, where $t_0 < \omega \leq \infty$, the following properties hold:[19]

  1. $v(t) \geq 0$ for all $t \in [t_0, \omega)$.
  2. If $L \in [0, \infty)$ and $L \in \mathrm{Range}\{\beta\}$, then $\omega = \infty$, $\|v\|_\infty < \infty$ and $\lim_{t \to \infty} v(t) = \beta^{-1}(L)$.
  3. If $L = \infty$, $v$ is not uniformly zero, and $\lim_{s \to \infty} \beta(s) = \infty$, then $\omega = \infty$ and $\lim_{t \to \infty} v(t) = \infty$.

These results have also been derived in the literature within different contexts; see, e.g., [20][21][22][23][24].
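
Property 2 can be illustrated numerically; the choices $\beta(v) = v$, $q(t) = 1$ and $e(t) = 2 + 1/(1+t)$ below (so that $L = 2$ and the perturbation is bounded) are illustrative.

```python
# Auxiliary comparison equation v' = -q(t)*beta(v) + e(t): with beta(v) = v,
# q(t) = 1 and e(t)/q(t) -> L = 2, the solution converges to beta^{-1}(L) = 2.
import numpy as np
from scipy.integrate import solve_ivp

beta = lambda v: v
q = lambda t: 1.0
e = lambda t: 2.0 + 1.0 / (1.0 + t)

sol = solve_ivp(lambda t, v: [-q(t) * beta(v[0]) + e(t)], (0.0, 60.0), [5.0],
                rtol=1e-9, atol=1e-12)
print("v(T) ~ beta^{-1}(L) = 2:", sol.y[0, -1])   # approaches 2
```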

Lyapunov stability of time-varying systems with unbounded perturbations

We present results from [25] which are related to Lyapunov inequalities with unbounded perturbations. Consider the system:

$$\dot{\zeta}(t) = g\big(t, \zeta(t)\big); \quad t \geq t_0, \qquad \zeta(t_0) = \zeta_0,$$

where $(t_0, \zeta_0) \in \mathbb{R} \times \mathbb{R}^m$, the solution $\zeta(t)$ takes values in $\mathbb{R}^m$ ($m$ is a strictly positive integer), and $g : [t_0, \infty) \times \mathbb{R}^m \to \mathbb{R}^m$ is a well-defined function with $g(t, 0) = 0$ for all $t \geq t_0$.

Assume that the system satisfies the Carathéodory conditions; that is, $g$ is locally essentially bounded on $[t_0, \infty) \times \mathbb{R}^m$, the mapping $t \mapsto g(t, \zeta)$ is measurable for every $\zeta \in \mathbb{R}^m$, and the mapping $\zeta \mapsto g(t, \zeta)$ is continuous for almost every $t \geq t_0$. The system then admits a locally absolutely continuous local Carathéodory solution that is defined on a maximal interval $[t_0, \omega)$.

Assume that there exist constants $\alpha > 0$, $\beta > 0$ with $\alpha < \beta$, locally absolutely continuous functions $r_1 \in C^0(\mathbb{R}, \mathbb{R})$, $r_2 \in C^0(\mathbb{R}, \mathbb{R})$ and a Lebesgue measurable function $h : \mathbb{R} \to \mathbb{R}$ satisfying the following:

(i) $(-1)^\alpha = -1$ and $(-1)^\beta$ is well-defined (we do not need $(-1)^\beta$ to be well-defined if the positivity of solutions is guaranteed).

(ii) $r_1(t) > 0$ and $r_2(t) > 0$ for all $t > t_0$, and $h(t) > 0$ for almost all $t > t_0$.

(iii) $\lim_{t \to \infty} \frac{r_2(t)}{r_1(t)} = \infty$ (from which it follows that the perturbation is unbounded, as illustrated by the Lyapunov inequality given later).

(iv) $\lim_{t \to \infty} \int_{t_0}^{t} r_1(\tau) h(\tau)\, d\tau = \infty$ and $\lim_{t \to \infty} \Lambda(t) = 0$, where $\Lambda(t) := \dfrac{r_1(t) \dot{r}_2(t) - \dot{r}_1(t) r_2(t)}{h(t)\, (r_1(t))^{\frac{2\beta - \alpha - 1}{\beta - \alpha}} (r_2(t))^{\frac{\beta - 2\alpha + 1}{\beta - \alpha}}}$ for almost all $t > t_0$.

(v) for each solution $\zeta(t)$ of the system with maximal interval of existence $[t_0, \omega)$, there exist positive constants $\delta$, $\sigma$, $c_1$, $c_2$, and a Lyapunov function $V \in C^1(\mathbb{R} \times \mathbb{R}^m, \mathbb{R}_+)$ satisfying

$$c_1 |\kappa|^\sigma \leq V(t, \kappa) \leq c_2 |\kappa|^\sigma, \quad \forall t \in \mathbb{R}, \ \forall \kappa \in \mathbb{R}^m,$$

$$\left. \frac{\partial V(t, \kappa)}{\partial t} \right|_{\kappa = \zeta(t)} + \left. \frac{\partial V(t, \kappa)}{\partial \kappa} \right|_{\kappa = \zeta(t)} \cdot g\big(t, \zeta(t)\big) \leq \big( -r_1(t) V^{\alpha}(t, \zeta(t)) + r_2(t) V^{\beta}(t, \zeta(t)) \big) h(t),$$

for almost every $t \in (t_0, \omega)$ that satisfies $V(t, \zeta(t)) < \delta$. Then there exists $c_3 > 0$ such that for each $|\zeta_0| < c_3$, one gets $\omega = \infty$ and

$$|\zeta(t)| \leq \sqrt[\sigma]{\frac{c_2}{c_1}}\, |\zeta_0|, \quad \forall t \geq t_0,$$

so that $\zeta = 0$ is uniformly stable. Furthermore, the origin is asymptotically stable.[25]

Example. The population growth model with Allee effect, which can be represented by the differential equation $\dot{N}(t) = R N(t) \left( \frac{N(t)}{A} - 1 \right) \left( 1 - \frac{N(t)}{K} \right)$, where $N$ is the population density, has been extensively studied in the literature. The positive constants $R$, $K$ and $A$ represent, respectively, the decay rate, the carrying capacity and the Allee threshold. In this example we generalize this cubic growth model to the time-varying case

$$\dot{N}(t) = R(t) N(t) \left( \frac{N(t)}{A(t)} - 1 \right) \left( 1 - \frac{N(t)}{K(t)} \right),$$

where $t \geq t_0$, the state is $N(t) \in \mathbb{R}$, $R : \mathbb{R} \to \mathbb{R}$ is a Lebesgue measurable function with $R(t) > 0$ for almost all $t > t_0$, and $A \in C^0(\mathbb{R}, \mathbb{R})$ and $K \in C^0(\mathbb{R}, \mathbb{R})$ are locally absolutely continuous functions such that $A(t) > 0$ and $K(t) > 0$ for all $t > t_0$.

The right-hand side of the equation is locally Lipschitz in $N$ and thus a unique solution exists with a maximal interval of existence $[t_0, \omega)$. The origin $N = 0$ is an equilibrium point. We aim to derive conditions that make $N = 0$ a uniformly stable and asymptotically stable extinction equilibrium. To this end, assume that

$$\int_{t_0}^{\infty} R(t)\, dt = \infty, \qquad \lim_{t \to \infty} \left( \frac{1}{A(t)} + \frac{1}{K(t)} \right) = \infty, \qquad \text{and} \qquad \lim_{t \to \infty} \left( \frac{\dot{A}(t) (K(t))^2 + \dot{K}(t) (A(t))^2}{R(t) A(t) K(t) (A(t) + K(t))} \right) = 0.$$

Let $V = N^2$. The bound $c_1 |\kappa|^\sigma \leq V \leq c_2 |\kappa|^\sigma$ is satisfied with $c_1 = 1$, $c_2 = 1$ and $\sigma = 2$. Simple computations yield

$$\dot{V}(t) \leq 2 R(t) \left( -V(t) + \left( \frac{1}{A(t)} + \frac{1}{K(t)} \right) V^{\frac{3}{2}}(t) \right) \quad \text{for almost all } t > t_0,$$

which has the form of the Lyapunov inequality above with $\alpha = 1$, $\beta = \frac{3}{2}$, $h(t) = 2R(t)$, $r_1(t) = 1$, $r_2(t) = \frac{1}{A(t)} + \frac{1}{K(t)}$, and $\delta$ arbitrary in $(0, \infty)$. Moreover, one can easily show that the function $\Lambda(t)$ goes to zero as $t$ goes to infinity. Thus, all conditions are satisfied, and so there exists $c_3 > 0$ such that for each $|N_0| < c_3$, one gets $\omega = \infty$ and

$$|N(t)| \leq \sqrt[\sigma]{\frac{c_2}{c_1}}\, |N_0|, \quad \forall t \geq t_0,$$

and hence $N = 0$ is uniformly stable; in fact, it is also asymptotically stable.
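
The following sketch simulates the model for the illustrative choices $R(t) = 1$, $A(t) = 1/(1+t)$ and $K(t) = 2/(1+t)$, which satisfy the three assumptions above, and shows a small initial population decaying to the extinction equilibrium.

```python
# Time-varying Allee model with illustrative R, A, K satisfying the assumptions.
import numpy as np
from scipy.integrate import solve_ivp

R = lambda t: 1.0
A = lambda t: 1.0 / (1.0 + t)
K = lambda t: 2.0 / (1.0 + t)

def rhs(t, y):
    N = y[0]
    return [R(t) * N * (N / A(t) - 1.0) * (1.0 - N / K(t))]

sol = solve_ivp(rhs, (0.0, 30.0), [0.05], rtol=1e-9, atol=1e-14)
print("N(0) =", sol.y[0, 0], " N(T) =", sol.y[0, -1])   # decays toward N = 0

# third assumption: the ratio below tends to 0 (for these choices it equals -1/(1+t))
t = 10.0
Adot, Kdot = -1.0 / (1.0 + t) ** 2, -2.0 / (1.0 + t) ** 2
ratio = (Adot * K(t) ** 2 + Kdot * A(t) ** 2) / (R(t) * A(t) * K(t) * (A(t) + K(t)))
print("condition ratio at t = 10:", ratio)               # -1/11
```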

References

  1. Lyapunov, A. M. The General Problem of the Stability of Motion (in Russian), doctoral dissertation, Univ. Kharkiv, 1892. English translations: (1) Stability of Motion, Academic Press, New York & London, 1966; (2) The General Problem of the Stability of Motion (A. T. Fuller, trans.), Taylor & Francis, London, 1992. Included is a biography by Smirnov and an extensive bibliography of Lyapunov's work.
  2. Shcherbakov 1992.
  3. Chetaev, N. G. On stable trajectories of dynamics, Kazan Univ Sci Notes, vol. 4, no. 1, 1936; The Stability of Motion, originally published in Russian in 1946 by ОГИЗ. Гос. изд-во технико-теорет. лит., Москва-Ленинград. Translated by Morton Nadler, Oxford, 1961, 200 pages.
  4. Letov, A. M. (1955). Устойчивость нелинейных регулируемых систем [Stability of Nonlinear Control Systems] (in Russian). Moscow: Gostekhizdat. English translation: Princeton, 1961.
  5. Kalman, R. E.; Bertram, J. F. (1960). "Control System Analysis and Design Via the "Second Method" of Lyapunov: I—Continuous-Time Systems". Journal of Basic Engineering. 82 (2): 371–393. doi:10.1115/1.3662604.
  6. LaSalle, J. P.; Lefschetz, S. (1961). Stability by Lyapunov's Second Method with Applications. New York: Academic Press.
  7. Parks, P. C. (1962). "Liapunov's method in automatic control theory". Control. I: Nov 1962; II: Dec 1962.
  8. Kalman, R. E. (1963). "Lyapunov functions for the problem of Lur'e in automatic control". Proc Natl Acad Sci USA. 49 (2): 201–205. Bibcode:1963PNAS...49..201K. doi:10.1073/pnas.49.2.201. PMC 299777. PMID 16591048.
  9. Smith, M. J.; Wisten, M. B. (1995). "A continuous day-to-day traffic assignment model and the existence of a continuous dynamic user equilibrium". Annals of Operations Research. 60 (1): 59–79. doi:10.1007/BF02031940. S2CID 14034490.
  10. Hahn, Wolfgang (1967). Stability of Motion. Springer. pp. 191–194, Section 40. doi:10.1007/978-3-642-50085-5. ISBN 978-3-642-50087-9.
  11. Braun, Philipp; Grüne, Lars; Kellett, Christopher M. (2021). (In-)Stability of Differential Inclusions: Notions, Equivalences, and Lyapunov-like Characterizations. Springer. pp. 19–20, Example 2.18. doi:10.1007/978-3-030-76317-6. ISBN 978-3-030-76316-9. S2CID 237964551.
  12. Vinograd, R. E. (1957). "The inadequacy of the method of characteristic exponents for the study of nonlinear differential equations". Doklady Akademii Nauk (in Russian). 114 (2): 239–240.
  13. Goh, B. S. (1977). "Global stability in many-species systems". The American Naturalist. 111 (977): 135–143. Bibcode:1977ANat..111..135G. doi:10.1086/283144. S2CID 84826590.
  14. Malkin, I. G. Theory of Stability of Motion, Moscow, 1952 (Gostekhizdat), Chap. II, para. 4 (in Russian). English translation: Language Service Bureau, Washington, AEC-tr-3352; originally On stability under constantly acting disturbances, Prikl. Mat., 1944, vol. 8, no. 3, 241–245 (in Russian); Amer. Math. Soc. transl. no. 8.
  15. Slotine, Jean-Jacques E.; Weiping Li (1991). Applied Nonlinear Control. NJ: Prentice Hall.
  16. Barbălat, I. (1959). "Systèmes d'équations différentielles d'oscillations non linéaires". Rev. Math. Pures Appl. 4: 267–270, p. 269.
  17. Farkas, B.; et al. (2016). "Variations on Barbălat's Lemma". Amer. Math. Monthly. 123 (8): 825–830, p. 827. doi:10.4169/amer.math.monthly.123.8.825.
  18. Farkas, B.; et al. (2016). "Variations on Barbălat's Lemma". Amer. Math. Monthly. 123 (8): 825–830, p. 826. doi:10.4169/amer.math.monthly.123.8.825.
  19. Naser, M. F. M. (2020). "State convergence of a class of time-varying differential equations". IMA Journal of Mathematical Control and Information. 37: 27–38.
  20. Naser, M. F. M.; Ikhouane, F. (2019). "Stability of time-varying systems in the absence of strict Lyapunov functions". IMA Journal of Mathematical Control and Information. doi:10.1093/imamci/dnx056.
  21. Jiang, Z.; Lin, Y.; Wang, Y. (2009). "Stabilization of nonlinear time-varying systems: a control Lyapunov function approach". Journal of Systems Science and Complexity. 22: 683–696.
  22. Malisoff, M.; Mazenc, F. (2005). "Further remarks on strict input-to-state stable Lyapunov functions for time-varying systems". Automatica. 41: 1973–1978.
  23. Mazenc, F. (2003). "Strict Lyapunov functions for time-varying systems". Automatica. 39: 349–353.
  24. Mu, X.; Cheng, D. (2005). "On the stability and stabilization of time-varying nonlinear control systems". Asian Journal of Control.
  25. Naser, M. F. M. (2022). "Behavior near time infinity of solutions of nonautonomous systems with unbounded perturbations". IMA Journal of Mathematical Control & Information, Theorem 6.2.

Further reading

This article incorporates material from asymptotically stable on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.