Author Archives: Jordan Jameson

Systems of Delay Differential Equations and the Lambert W Function Continued…

Now we reach the end of this particular journey down the road of DDE systems. As it turns out, the problem of determining the stability of a DDE system with both a non-delayed term and a delayed term is very difficult. Once again consider the DDE system

\dot{\bf x}\left(t\right) = {\bf Ax}\left(t\right) + {\bf A}_d {\bf x}\left(t-\tau\right)

The method we will use to solve this system is the Laplace transform with zero initial conditions.

s{\bf X}\left(s\right) = {\bf AX}\left(s\right) + {\bf A}_d {\bf X}\left(s\right)e^{-s\tau}

Once again, we factor out {\bf X}\left(s\right), rearrange, and are left with

\left(s{\bf I} - {\bf A}\right)e^{s\tau} = {\bf A}_d

In order to get this into the form necessary to use the Lambert W function, we multiply both sides by \tau e^{-{\bf A}\tau}.

\left(s{\bf I} - {\bf A}\right)e^{s\tau}\tau e^{-{\bf A}\tau} = {\bf A}_d \tau e^{-{\bf A}\tau}

\left(s{\bf I} - {\bf A}\right)\tau e^{\left(s{\bf I} - {\bf A}\right)\tau} = {\bf A}_d \tau e^{-{\bf A}\tau}

With our current knowledge of the Lambert W function, we have this problem pegged, right? We can use the Lambert W function to solve this equation easily.

s{\bf I} = \dfrac{1}{\tau}W_{k}\left({\bf A}_d \tau e^{-{\bf A}\tau}\right) + {\bf A}

This means the eigenvalues of the system can be found as

\mbox{det}\left(s{\bf I} - \left[\dfrac{1}{\tau}W_{k}\left({\bf A}_d \tau e^{-{\bf A}\tau}\right) + {\bf A}\right]\right) = 0

However, there is a problem with this conclusion. Remember our discussion on how matrices must commute (i.e. {\bf XY} = {\bf YX}) in order for them to satisfy the original exponential equation leading to the Lambert W function? Look again at this equation.

\left(s{\bf I} - {\bf A}\right)\tau e^{\left(s{\bf I} - {\bf A}\right)\tau} = {\bf A}_d \tau e^{-{\bf A}\tau}

Well it turns out that, in general, the matrices \left(s{\bf I} - {\bf A}\right)\tau and {\bf A}_d \tau e^{-{\bf A}\tau} do not commute. In other words, in general,

\left[\left(s{\bf I} - {\bf A}\right)\tau\right]\left[{\bf A}_d \tau e^{-{\bf A}\tau}\right] \neq \left[{\bf A}_d \tau e^{-{\bf A}\tau}\right]\left[\left(s{\bf I} - {\bf A}\right)\tau\right] (it’s true, run some numbers!!)

This is due to the fact that, in the general case, {\bf AA}_d \neq {\bf A}_d{\bf A}.
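Following the post's invitation to run some numbers, here is a minimal check in Python using scipy.linalg.expm for the matrix exponential. The matrices {\bf A} and {\bf A}_d and the values of s and \tau are arbitrary examples, not taken from any particular system; the only requirement is {\bf AA}_d \neq {\bf A}_d{\bf A}.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary example matrices chosen so that A @ Ad != Ad @ A
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Ad = np.array([[1.0, 0.0], [0.5, 1.0]])
tau, s = 1.0, 0.3  # arbitrary delay and test point

P = (s * np.eye(2) - A) * tau     # (sI - A) tau
Q = Ad * tau @ expm(-A * tau)     # Ad tau e^{-A tau}

print(np.linalg.norm(P @ Q - Q @ P))  # nonzero: P and Q do not commute
```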

As it turns out, there is a proposed method to deal with this problem which involves introducing an unknown matrix into the equation that forces commutativity. We will explore this in the next, and final, post.

Systems of Delay Differential Equations and the Lambert W Function

To move forward with solving DDE systems, we must understand how to evaluate the Lambert W function for matrices. For the DDE system given as

\dot{\bf x}\left(t\right) = {\bf A}_d{\bf x}\left(t-\tau\right)

The eigenvalues (in this case we will use s to denote the eigenvalues instead of the more common \lambda) of the system can be found as:

\mbox{det}\left(s{\bf I} - \dfrac{1}{\tau}W_{k}\left({\bf A}_{d}\tau\right)\right) = 0

Functions of matrices can generally be evaluated using the following relationship

F\left({\bf A}\right) = {\bf V}F\left({\bf J}\right){\bf V}^{-1}

where {\bf A} is a matrix, {\bf V} is a matrix of eigenvectors, and {\bf J} is the Jordan canonical form of the matrix {\bf A}. The Jordan canonical form of a matrix contains all the eigenvalues of the matrix and accounts for any repeated eigenvalues. As a quick example, consider a matrix {\bf A} with eigenvalues \lambda_{1} and \lambda_{2}, and let’s assume that \lambda_{1} has multiplicity two and \lambda_{2} has multiplicity three. Since there are two distinct eigenvalues, there are two Jordan blocks. Then, {\bf J} can be written as

{\bf J} = \left[\begin{array}{ccccc}  \lambda_{1} & 1 & 0 & 0 & 0\\  0 & \lambda_{1} & 0 & 0 & 0\\  0 & 0 & \lambda_{2} & 1 & 0\\  0 & 0 & 0 & \lambda_{2} & 1\\  0 & 0 & 0 & 0 & \lambda_{2}  \end{array}\right] = \left[\begin{array}{cc} {\bf J}_{1} & {\bf 0}\\  {\bf 0} & {\bf J}_{2}\end{array}\right]

{\bf J}_{1} = \left[\begin{array}{cc}  \lambda_{1} & 1\\  0 & \lambda_{1}\end{array}\right]

{\bf J}_{2} = \left[\begin{array}{ccc}  \lambda_{2} & 1 & 0\\  0 & \lambda_{2} & 1\\  0 & 0 & \lambda_{2}\end{array}\right]

Each Jordan block accounts for a single repeated eigenvalue. Now, the expression F\left({\bf J}\right) is

F\left({\bf J}\right) = \left[\begin{array}{ccccc}  F\left(\lambda_{1}\right) & F'\left(\lambda_{1}\right) & 0 & 0 & 0\\  0 & F\left(\lambda_{1}\right) & 0 & 0 & 0\\  0 & 0 & F\left(\lambda_{2}\right) & F'\left(\lambda_{2}\right) & \frac{1}{2}F''\left(\lambda_{2}\right)\\  0 & 0 & 0 & F\left(\lambda_{2}\right) & F'\left(\lambda_{2}\right)\\  0 & 0 & 0 & 0 & F\left(\lambda_{2}\right)  \end{array}\right]
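As a quick numerical sanity check of these entries, take F = exp, for which F\left(\lambda\right) = F'\left(\lambda\right) = F''\left(\lambda\right) = e^{\lambda}, and compare the block formula for {\bf J}_{2} against scipy.linalg.expm. The eigenvalue value is an arbitrary example.

```python
import numpy as np
from scipy.linalg import expm

lam = 0.7  # arbitrary example eigenvalue
J2 = np.array([[lam, 1.0, 0.0],
               [0.0, lam, 1.0],
               [0.0, 0.0, lam]])

# For F = exp: F(lam) = F'(lam) = F''(lam) = e^lam
F = np.exp(lam)
FJ = np.array([[F,   F,   0.5 * F],
               [0.0, F,   F],
               [0.0, 0.0, F]])

print(np.allclose(expm(J2), FJ))  # True
```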

Thus, in general, the function of a single Jordan block (for a single repeated eigenvalue \lambda) can be written as

F\left({\bf J}\right) = \left[\begin{array}{ccccc}  F\left(\lambda\right) & F'\left(\lambda\right) & \frac{1}{2}F''\left(\lambda\right) & \cdots & \frac{1}{\left(n-1\right)!}F^{\left(n-1\right)}\left(\lambda\right)\\    0 & F\left(\lambda\right) & F'\left(\lambda\right) & \cdots & \frac{1}{\left(n-2\right)!}F^{\left(n-2\right)}\left(\lambda\right)\\    \vdots & \ddots & \ddots & \ddots & \vdots\\    {} & {} & {} & F\left(\lambda\right) & F'\left(\lambda\right)\\    0 & \cdots & \cdots & \cdots & F\left(\lambda\right)  \end{array}\right]

where n is the multiplicity of the eigenvalue. If there are no repeated eigenvalues, the matrix {\bf J} becomes a simple diagonal matrix with the eigenvalues on the diagonal (and zeros everywhere else). Therefore, since the Lambert W function is a function (same as sin or cos), the Lambert W of {\bf A} can be written as

W\left({\bf A}\right) = {\bf V}W\left({\bf J}\right){\bf V}^{-1}
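A minimal sketch of this evaluation in Python, for the easy case where {\bf A} has distinct eigenvalues so that {\bf J} is diagonal and no derivative terms are needed. The matrix {\bf A} below is an arbitrary example; the result is verified against the defining equation W\left({\bf A}\right)e^{W\left({\bf A}\right)} = {\bf A}.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import lambertw

A = np.array([[0.5, 0.2], [0.1, 0.3]])  # arbitrary example with distinct eigenvalues
evals, V = np.linalg.eig(A)

WJ = np.diag(lambertw(evals, 0))  # principal branch applied to the diagonal of J
WA = V @ WJ @ np.linalg.inv(V)    # W(A) = V W(J) V^{-1}

# W(A) must satisfy the defining equation W(A) e^{W(A)} = A
print(np.allclose(WA @ expm(WA), A))  # True
```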

Differentiating the defining equation x = W\left(x\right)e^{W\left(x\right)} for W and solving for W' (implicit differentiation), we obtain the following expression for the derivative of W

W'\left(x\right) = \dfrac{1}{\left(1 + W\left(x\right)\right)e^{W\left(x\right)}}
W'\left(x\right) = \dfrac{W\left(x\right)}{x\left(1 + W\left(x\right)\right)}, \mbox{ for } x\neq 0
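The derivative formula can be checked numerically against a central finite difference of scipy's lambertw; the test point x = 2 and step size are arbitrary choices.

```python
import numpy as np
from scipy.special import lambertw

x, h = 2.0, 1e-6  # arbitrary test point and step size
w = lambertw(x).real
formula = w / (x * (1.0 + w))  # W'(x) = W(x) / (x (1 + W(x)))
numeric = (lambertw(x + h).real - lambertw(x - h).real) / (2.0 * h)

print(abs(formula - numeric) < 1e-6)  # True
```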

Further derivatives of W can be taken and a general formulation of the nth derivative of W can be found in the paper “On the Lambert W Function” by Corless, et al. There is a Matlab function that will evaluate the Lambert W function for matrices (lambertwm), however you must search for it as it was written by a couple of doctoral students at Michigan.

One quick note before we move on to the current problems encountered with DDE systems. For the equation

{\bf Y}e^{\bf Y} = {\bf X},

where {\bf X} and {\bf Y} are matrices, the solution, {\bf Y} = W_{k}\left({\bf X}\right), is valid only when {\bf X} and {\bf Y} commute; i.e. when {\bf XY} = {\bf YX}. The proof is quite simple and satisfactory. First, assume that {\bf Y}e^{\bf Y} = {\bf X}. It follows that

{\bf XY} = {\bf Y}e^{\bf Y}{\bf Y} = {\bf Y}^2e^{\bf Y} = {\bf YX}

since {\bf Y} will always commute with its own matrix exponential, e^{\bf Y}. Thus, any {\bf Y} satisfying the original exponential equation necessarily commutes with {\bf X}, which is why the commutativity condition goes hand in hand with the matrix Lambert W function.
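This fact is easy to confirm numerically: for an arbitrary example matrix {\bf Y}, the product {\bf X} = {\bf Y}e^{\bf Y} commutes with {\bf Y}.

```python
import numpy as np
from scipy.linalg import expm

Y = np.array([[0.2, 1.0], [0.0, -0.5]])  # arbitrary example matrix
X = Y @ expm(Y)                          # X = Y e^Y

# Y commutes with e^Y, hence X and Y commute
print(np.allclose(X @ Y, Y @ X))  # True
```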

The next type of system that we will encounter is a DDE system with both a delay term and a non-delay term.

\dot{\bf x}\left(t\right) = {\bf Ax}\left(t\right) + {\bf A}_d{\bf x}\left(t-\tau\right)

More on this to come. Stay tuned!

Delay Differential Equations and the Lambert W Function Continued…

In order to get a more firm grasp on the Lambert W function and how it can be used to solve delay differential equations, let’s consider a system with a scalar state and a delay on the input to the system.

\dot{x}\left(t\right) = ax\left(t\right) + a_{d}u\left(t-\tau\right)

In this case u\left(t\right) is the system input. In traditional differential equation language, this is a non-homogeneous delay differential equation (similar to the first post, but this time there is also a non-delay term). So, let’s feed the state back into the system (state feedback), so that

u\left(t\right) = x\left(t\right)

Thus, the original equation becomes

\dot{x}\left(t\right) = ax\left(t\right) + a_{d}x\left(t-\tau\right)

If we assume a solution of the form x\left(t\right) = Ce^{st}, the equation becomes

Cse^{st} = aCe^{st} + a_{d}Ce^{s\left(t-\tau\right)}

Collecting terms and dividing out C

e^{st}\left(s - a - a_{d}e^{-s\tau}\right) = 0

Clearly e^{st} cannot equal zero, thus the portion of the equation in the parentheses must equal zero. Thus, after re-arranging,

s-a = a_{d}e^{-s\tau}

\left(s-a\right)e^{s\tau} = a_{d}

\left(s-a\right)\tau e^{\left(s-a\right)\tau} = a_{d}\tau e^{-a\tau}

Thus, using the Lambert W function,

\left(s-a\right)\tau = W_{k}\left(a_{d}\tau e^{-a\tau}\right)

Stability of the differential equation is thus determined by using the principal branch of the Lambert W function.

s = \dfrac{1}{\tau}W_{0}\left(a_{d}\tau e^{-a\tau}\right) + a

The Lambert W function for a scalar can be evaluated in Matlab with the following commands.

lambertw(k,expression), or in this case,

s = \dfrac{1}{\tau}\mbox{lambertw}\left(0,a_{d}\tau e^{-a\tau}\right) + a
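For those without Matlab, scipy.special.lambertw does the same job in Python (note the reversed argument order, lambertw(x, k)). The parameter values below are arbitrary examples, and the computed root is verified against the characteristic equation.

```python
import numpy as np
from scipy.special import lambertw

a, ad, tau = -1.0, 0.5, 1.0  # arbitrary example parameters
s = lambertw(ad * tau * np.exp(-a * tau), 0) / tau + a

# Verify: s must satisfy the characteristic equation s - a - ad*e^{-s*tau} = 0
print(abs(s - a - ad * np.exp(-s * tau)))  # effectively zero
```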

This process for a system of delay differential equations is the same as that for a scalar delay differential equation (as shown in the first Lambert W post). However, the difference lies in the evaluation of the Lambert W function for a matrix as opposed to a scalar. More on the DDE systems in the next post.

Delay Differential Equations and the Lambert W Function

Delay differential equations are natural to study since most systems involve a delay from the input to the output (accelerators, computers, etc.). On a more nerdy level they involve some pretty interesting math so let’s take a look.

Consider the scalar, linear, pure delay differential equation:

\dot{x}\left(t\right) = a_{d}x\left(t-\tau\right)

This type of equation is associated with a system where the output is equivalent to the input delayed by a time constant \tau and multiplied by a system constant a_d. In studying the stability of differential equations, we want to know whether they will behave in one of two ways: either the solution to the differential equation grows without bound as time approaches infinity (unstable), or the solution remains bounded, settling toward some constant, as time approaches infinity (stable). To do this, let’s take the Laplace transform of the DDE.

sX(s) - x(0) = a_{d}e^{-s\tau}X(s)

Collecting the terms and assuming the initial condition to be zero, we arrive at the following equation.

X(s)\left(s - a_{d}e^{-s\tau}\right) = 0

To avoid a trivial solution, only the part of the equation in the parentheses can equal zero. Therefore,

s - a_{d}e^{-s\tau} = 0

Let’s introduce the Lambert W function. This function satisfies the equation:

Ye^Y = X

The solution to this equation is:

Y_k = W_{k}\left(X\right)

In other words,

W_{k}\left(X\right) e^{W_{k}\left(X\right)} = X

where W_{k}\left(X\right) is the Lambert W function of X and k is the branch number. This is a transcendental equation, meaning it cannot be solved by a finite sequence of algebraic operations, and it has infinitely many solutions. The Lambert W function has infinitely many branches (similar to \mbox{tan}^{-1} or \mbox{sin}^{-1}), one for each of these solutions. Additionally, it can be shown that the value with the largest real part is always given by the principal branch (i.e. k=0).
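The branch structure can be seen directly with scipy.special.lambertw: every branch value satisfies the defining equation. The value x = 1.5 is an arbitrary test point.

```python
import numpy as np
from scipy.special import lambertw

x = 1.5  # arbitrary test value
for k in (-2, -1, 0, 1, 2):
    w = lambertw(x, k)                   # k-th branch value (complex for k != 0)
    assert np.isclose(w * np.exp(w), x)  # w e^w = x on every branch
print("all branches satisfy w e^w = x")
```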

Returning to the Laplace transform equation,

s - a_{d}e^{-s\tau}=0

The roots of this equation determine the stability of the DDE. Therefore, we solve for the values that make this equation zero.

s = a_{d}e^{-s\tau}

Multiplying both sides by \tau e^{s\tau} yields:

\underbrace{s\tau}_{Y}\underbrace{e^{s\tau}}_{e^{Y}} = \underbrace{a_{d}\tau}_{X}

Clearly, this equation is a candidate for using the Lambert W function. So, for this simple function, it can be seen that the roots for the DDE are given as:

s_{k} = \dfrac{1}{\tau}W_{k}\left(a_{d}\tau\right)

There are infinitely many values of s that satisfy this equation; however, since the root with the largest real part comes from the principal branch, the only branch that need be evaluated is the principal branch. In other words, if the real part of the principal-branch root is negative, then the real parts of all the other roots are guaranteed negative. Therefore, for stability,

\mbox{Re}\left(s_{0}\right) = \dfrac{1}{\tau}\mbox{Re}\left(W_{0}\left(a_{d}\tau\right)\right) < 0
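A short numerical illustration with arbitrary example values (a_d = -0.3, \tau = 1): each branch of the Lambert W function yields a valid root of the characteristic equation, and the principal branch yields the root with the largest real part.

```python
import numpy as np
from scipy.special import lambertw

ad, tau = -0.3, 1.0  # arbitrary example values
roots = [lambertw(ad * tau, k) / tau for k in (0, 1, -1, 2, -2)]

# Each branch yields a genuine root of s = ad * e^{-s*tau}
for s in roots:
    assert np.isclose(s, ad * np.exp(-s * tau))

# The principal branch (first entry) has the largest real part,
# so it alone determines stability
print(all(roots[0].real >= s.real for s in roots[1:]))  # True
print(roots[0].real < 0)  # True: stable for these example values
```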

This is an important result for understanding delay systems. However, the study of delays in systems of differential equations is much more difficult and remains an open problem in the field of dynamics and control systems. For example, consider the DDE system

\dot{\bf x}\left(t\right) = {\bf A}_{d}{\bf x}\left(t-\tau\right)

where {\bf x}\left(\cdot\right) is a vector and {\bf A}_{d} is a matrix. Stay tuned for more!