
Sieve Analysis

The first method of mechanical analysis is sieve analysis. It is fairly straightforward and involves straining a soil sample through a series of sieves. Generally the sieves are stacked as seen in the above diagram, with a US Standard Sieve No. 4 at the top (this sieve size has a 4.75 mm opening) and a Sieve No. 200 at the bottom (with a 0.075 mm opening). The stack usually consists of 5 to 7 sieves and has a pan at the bottom. As you can see, the sieve openings get smaller as you go down the stack, so larger particles are captured while smaller particles pass through. This allows us to measure how much soil of each size was in the original soil sample.

A few important steps:

  • the soil sample needs to be oven-dried and all clumps broken up.
  • the weight of the empty sieves needs to be obtained.

Once this is done, the soil sample is placed into the top sieve, the lid is placed on the stack and the stack is placed in a mechanical shaker. The stack will then be shaken vigorously for approximately 5 minutes to ensure proper distribution of the particles.

The next step is to calculate what is called the percent passing or the percent finer. This is simply the percentage of the entire soil sample that passed through a particular sieve. For example, the percent passing the No. 200 sieve would be the mass of the soil in the bottom pan (since that is the soil that passed through the No. 200 sieve) divided by the total mass of the original soil sample. So in general, counting the pan as the last sieve (sieve n), we can write:

\%P_{i} = \dfrac{\sum_{j=i+1}^{n} m_{j}}{\sum_{j=1}^{n} m_{j}} \times 100

where

  • %P is the percent passing
  • i represents the ith sieve
  • n represents the number of sieves
  • m_{j} is the mass of soil retained on sieve j
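
To make the bookkeeping concrete, here is a minimal Scilab sketch of the percent-passing calculation. The function name and the idea of handing in the retained masses as a single vector are my own illustration, not part of the standard lab procedure.

//a minimal sketch: given the mass retained on each sieve (top to
//bottom, with the pan as the last entry), compute the percent
//passing for each actual sieve
function pct = percent_passing(m_retained)
    m_total = sum(m_retained);          //total mass of the sample
    n = size(m_retained, 'r');          //number of entries, pan included
    pct = zeros(n-1, 1);
    for i = 1:n-1
        //mass passing sieve i = everything retained below it
        pct(i) = 100 * sum(m_retained(i+1:n)) / m_total;
    end
endfunction

//example call with made-up masses in grams:
//pct = percent_passing([40; 95; 160; 120; 60; 20; 5]);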

Once the percent passing is determined for each sieve, a plot is made of sieve or particle size versus percent passing, as seen below. The percent passing is on the ordinate and the sieve or particle size is on the abscissa. Also notice the abscissa is logarithmic and reversed, so particle size decreases from left to right.
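
A hedged Scilab sketch of the plot follows; the sieve openings and percentages are made-up example values, and the axes_reverse axes property is used to make the logarithmic abscissa run from coarse to fine.

//plot percent passing versus particle size on a logarithmic,
//reversed abscissa (made-up example data)
d = [4.75; 2.0; 0.85; 0.425; 0.25; 0.15; 0.075]; //sieve openings, mm
pct = [95; 82; 64; 41; 27; 15; 6];               //percent passing

plot2d(d, pct, logflag="ln");        //log scale on the abscissa
a = gca();
a.axes_reverse = ["on" "off" "off"]; //particle size decreases to the right
xlabel("Particle size (mm)");
ylabel("Percent passing (%)");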

In future posts we will look more carefully at what information this distribution curve can provide, but for now we are highlighting the process. If you have any questions, please let me know in the comments.


Systems of Delay Differential Equations and the Lambert W Function

To move forward in solving DDE systems, we must first understand how the Lambert W function is evaluated for matrices. For the DDE system given as

\dot{\bf x}\left(t\right) = {\bf A}_d{\bf x}\left(t-\tau\right)

The eigenvalues (in this case we will use s to denote the eigenvalues instead of the more common \lambda) of the system can be found as:

\mbox{det}\left(s{\bf I} - \dfrac{1}{\tau}W_{k}\left({\bf A}_{d}\tau\right)\right) = 0

Functions of matrices can generally be evaluated using the following relationship

F\left({\bf A}\right) = {\bf V}F\left({\bf J}\right){\bf V}^{-1}

where {\bf A} is a matrix, {\bf V} is a matrix of (generalized) eigenvectors, and {\bf J} is the Jordan canonical form of the matrix {\bf A}. The Jordan canonical form of a matrix contains all the eigenvalues of the matrix and accounts for any repeated eigenvalues. As a quick example, consider a matrix {\bf A} with eigenvalues \lambda_{1} and \lambda_{2}, and let’s assume that \lambda_{1} has multiplicity two and \lambda_{2} has multiplicity three. Since there are two repeated eigenvalues, there are two Jordan blocks. Then, {\bf J} can be written as

{\bf J} = \left[\begin{array}{ccccc}  \lambda_{1} & 1 & 0 & 0 & 0\\  0 & \lambda_{1} & 0 & 0 & 0\\  0 & 0 & \lambda_{2} & 1 & 0\\  0 & 0 & 0 & \lambda_{2} & 1\\  0 & 0 & 0 & 0 & \lambda_{2}  \end{array}\right] = \left[\begin{array}{cc} {\bf J}_{1} & {\bf 0}\\  {\bf 0} & {\bf J}_{2}\end{array}\right]

{\bf J}_{1} = \left[\begin{array}{cc}  \lambda_{1} & 1\\  0 & \lambda_{1}\end{array}\right]

{\bf J}_{2} = \left[\begin{array}{ccc}  \lambda_{2} & 1 & 0\\  0 & \lambda_{2} & 1\\  0 & 0 & \lambda_{2}\end{array}\right]

Each Jordan block accounts for a single repeated eigenvalue. Now, the expression F\left({\bf J}\right) is

F\left({\bf J}\right) = \left[\begin{array}{ccccc}  F\left(\lambda_{1}\right) & F'\left(\lambda_{1}\right) & 0 & 0 & 0\\  0 & F\left(\lambda_{1}\right) & 0 & 0 & 0\\  0 & 0 & F\left(\lambda_{2}\right) & F'\left(\lambda_{2}\right) & \frac{1}{2}F''\left(\lambda_{2}\right)\\  0 & 0 & 0 & F\left(\lambda_{2}\right) & F'\left(\lambda_{2}\right)\\  0 & 0 & 0 & 0 & F\left(\lambda_{2}\right)  \end{array}\right]

Thus, in general, the function of a single Jordan block (for a single repeated eigenvalue \lambda) can be written as

F\left({\bf J}\right) = \left[\begin{array}{ccccc}  F\left(\lambda\right) & F'\left(\lambda\right) & \frac{1}{2}F''\left(\lambda\right) & \cdots & \frac{1}{\left(n-1\right)!}F^{\left(n-1\right)}\left(\lambda\right)\\  0 & F\left(\lambda\right) & F'\left(\lambda\right) & \cdots & \frac{1}{\left(n-2\right)!}F^{\left(n-2\right)}\left(\lambda\right)\\  \vdots & \ddots & \ddots & \ddots & \vdots\\  0 & \cdots & 0 & F\left(\lambda\right) & F'\left(\lambda\right)\\  0 & \cdots & \cdots & 0 & F\left(\lambda\right)\end{array}\right]

where n is the multiplicity of the eigenvalue (i.e. the size of the Jordan block). If there are no repeated eigenvalues, the matrix {\bf J} becomes a simple diagonal matrix with the eigenvalues on the diagonal (and zeros everywhere else). Therefore, since the Lambert W function is a function (just like sin or cos), the Lambert W of {\bf A} can be written as

W\left({\bf A}\right) = {\bf V}W\left({\bf J}\right){\bf V}^{-1}
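
Since Scilab has no built-in Lambert W, here is a minimal sketch of this evaluation for the special case where {\bf A} has distinct eigenvalues (so {\bf J} is diagonal); the scalar principal branch is computed with a short Newton iteration. This is only an illustration of the idea, not the lambertwm routine mentioned below.

//principal-branch Lambert W of a scalar via Newton iteration on
//w*exp(w) - x = 0 (a sketch; assumes x is away from the branch
//point at -1/e)
function w = lambertw0(x)
    w = log(1 + x);                        //starting guess
    for iter = 1:50
        w = w - (w*exp(w) - x) / (exp(w)*(1 + w));
    end
endfunction

//W(A) = V*W(J)*V^(-1) for a matrix A with distinct eigenvalues,
//so that J is diagonal and W(J) is applied entry-wise
function W = lambertw0m(A)
    [V, D] = spec(A);                      //eigenvectors and eigenvalues
    n = size(A, 'r');
    WJ = zeros(n, n);
    for i = 1:n
        WJ(i, i) = lambertw0(D(i, i));
    end
    W = V * WJ * inv(V);
endfunction

//check: W*expm(W) should reproduce A, e.g.
//A = [0.5 0.1; 0.2 0.4]; W = lambertw0m(A); disp(W*expm(W) - A);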

Differentiating the defining equation x = W\left(x\right)e^{W\left(x\right)} for W and solving for W' (implicit differentiation), we obtain the following expression for the derivative of W

W'\left(x\right) = \dfrac{1}{\left(1 + W\left(x\right)\right)e^{W\left(x\right)}}
W'\left(x\right) = \dfrac{W\left(x\right)}{x\left(1 + W\left(x\right)\right)}, \mbox{ for } x\neq 0

Further derivatives of W can be taken, and a general formula for the nth derivative of W can be found in the paper “On the Lambert W Function” by Corless et al. There is a Matlab function that will evaluate the Lambert W function for matrices (lambertwm); however, you must search for it, as it was written by a couple of doctoral students at Michigan.

One quick note before we move on to the current problems encountered with DDE systems. For the equation

{\bf Y}e^{\bf Y} = {\bf X},

where {\bf X} and {\bf Y} are matrices, the solution, {\bf Y} = W_{k}\left({\bf X}\right), is valid only when {\bf X} and {\bf Y} commute; i.e. when {\bf XY} = {\bf YX}. The proof is quite simple and satisfactory. First, assume that {\bf Y}e^{\bf Y} = {\bf X}. It follows that

{\bf XY} = {\bf Y}e^{\bf Y}{\bf Y} = {\bf Y}^2e^{\bf Y} = {\bf YX}

since {\bf Y} will always commute with its own matrix exponential, e^{\bf Y}. Thus, any matrix {\bf Y} that satisfies the original exponential equation necessarily commutes with {\bf X}, which is why the Lambert W solution is restricted to this commuting case.
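
A quick numerical illustration of this fact, with an arbitrary made-up matrix {\bf Y}:

//pick any Y, construct X = Y*expm(Y) so that Y*e^Y = X holds by
//construction, and verify that X and Y commute
Y = [0.2 1.0; -0.5 0.3];  //arbitrary example matrix
X = Y * expm(Y);
disp(X*Y - Y*X);          //~zero matrix, so XY = YX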

The next type of system that we will encounter is a DDE system with both a delay term and a non-delay term.

\dot{\bf x}\left(t\right) = {\bf Ax}\left(t\right) + {\bf A}_d{\bf x}\left(t-\tau\right)

More on this to come. Stay tuned!

Delay Differential Equations and the Lambert W Function Continued…

In order to get a more firm grasp on the Lambert W function and how it can be used to solve delay differential equations, let’s consider a system with a scalar state and a delay on the input to the system.

\dot{x}\left(t\right) = ax\left(t\right) + a_{d}u\left(t-\tau\right)

In this case u\left(t\right) is the system input. In traditional differential equation language, this is a non-homogeneous delay differential equation (similar to the first post, but this time there is also a non-delay term). So, let’s feed the state back into the system (a state-feedback system), so that

u\left(t\right) = x\left(t\right)

Thus, the original equation becomes

\dot{x}\left(t\right) = ax\left(t\right) + a_{d}x\left(t-\tau\right)

If we assume a solution of the form x\left(t\right) = Ce^{st}, the equation becomes

Cse^{st} = aCe^{st} + a_{d}Ce^{s\left(t-\tau\right)}

Collecting terms and dividing out C

e^{st}\left(s - a - a_{d}e^{-s\tau}\right) = 0

Clearly e^{st} cannot equal zero, so the portion of the equation in the parentheses must equal zero. After re-arranging,

s-a = a_{d}e^{-s\tau}

\left(s-a\right)e^{s\tau} = a_{d}

\left(s-a\right)\tau e^{\left(s-a\right)\tau} = a_{d}\tau e^{-a\tau}

Thus, using the Lambert W function,

\left(s-a\right)\tau = W_{k}\left(a_{d}\tau e^{-a\tau}\right)

Stability of the differential equation is thus determined by using the principal branch of the Lambert W function.

s = \dfrac{1}{\tau}W_{0}\left(a_{d}\tau e^{-a\tau}\right) + a

The Lambert W function for a scalar can be evaluated in Matlab with the following commands.

lambertw(k,expression), or in this case,

s = \dfrac{1}{\tau}\mbox{lambertw}\left(0,a_{d}\tau e^{-a\tau}\right) + a
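
In Scilab (which lacks a built-in lambertw), the same computation can be sketched with a short Newton iteration standing in for lambertw(0, ·); the values of a, a_{d}, and \tau below are arbitrary constants chosen for illustration.

//rightmost root of s = a + a_d*exp(-s*tau) via the principal branch
a = -1; a_d = 0.5; tau = 0.2;   //hypothetical system constants
x = a_d * tau * exp(-a * tau);  //argument of the Lambert W function

w = log(1 + x);                 //starting guess, principal branch
for iter = 1:50
    w = w - (w*exp(w) - x) / (exp(w)*(1 + w)); //Newton on w*e^w = x
end

s = w / tau + a;                //rightmost characteristic root
disp(s);                        //Re(s) < 0 here, so the loop is stable
disp(s - a - a_d*exp(-s*tau));  //residual of the characteristic equation, ~0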

This process for a system of delay differential equations is the same as that for a scalar delay differential equation (as shown in the first Lambert W post). However, the difference lies in the evaluation of the Lambert W function for a matrix as opposed to a scalar. More on the DDE systems in the next post.

Least Squares Linear Regression

Least-squares regression is a methodology for finding the equation of a best-fit line through a set of data points. It also provides a means of describing how well the data correlate to a linear relationship. An example of data with a general linear trend is seen in the above graph. First we will go over the derivation of the formulas from theory; Scilab code implementing the algorithm is appended at the end of this post.

The equation of a line can be written as:

y = mx + b

A data point that is not directly on the line but is in the proximity of the line can be written as:

y_{i} = mx_{i} + b + e_{i}

where e_{i} is the vertical error between the y-value given by the line and the actual y-value of the data. The goal is to come up with a line that minimizes this error. In least-squares regression, this is accomplished by minimizing the sum of the squares of the errors. The sum of the squares of the errors is given by:

S = \sum_{i=1}^{n} e_{i}^{2} = \sum_{i=1}^{n}\left(y_{i} - mx_{i} - b\right)^{2}

In order to minimize this value, the minimum-finding techniques of differential calculus will be used. First, take the derivative with respect to the slope and set it to zero:

\dfrac{\partial S}{\partial m} = -2\sum_{i=1}^{n} x_{i}\left(y_{i} - mx_{i} - b\right) = 0

Taking the derivative with respect to the y-intercept and setting it to zero yields:

\dfrac{\partial S}{\partial b} = -2\sum_{i=1}^{n}\left(y_{i} - mx_{i} - b\right) = 0 \quad\Longrightarrow\quad b = \bar{y} - m\bar{x}

which can be substituted into the previous equation to solve for the slope:

m = \dfrac{n\sum x_{i}y_{i} - \sum x_{i}\sum y_{i}}{n\sum x_{i}^{2} - \left(\sum x_{i}\right)^{2}}

The y-intercept is then:

b = \bar{y} - m\bar{x}

It can be seen that these last two formulas only require the coordinates of the data points and the number of points, and the equation for the least-squares linear regression line can be found.

Finally, below is the Scilab code implementation.

//the linear regression function takes x-values and
//y-values of data in the column vectors 'X' and 'Y' and finds
//the best fit line through the data points. It returns
//the slope and y-intercept of the line as well as the
//coefficient of determination ('r_sq').

//the function call for this should be of the form:
//'[m,b,r2]=Linear_Regression(x,y)'
function [slope, y_int, r_sq]=Linear_Regression(X, Y)
    //determine the number of data points
    n=size(X,'r');

    //initialize each summation
    sum_x=0;
    sum_y=0;
    sum_xy=0;
    sum_x_sq=0;
    sum_y_sq=0;

    //calculate each sum required to find the slope, y-intercept and r_sq
    for i=1:n
        sum_x=sum_x+X(i);
        sum_y=sum_y+Y(i);
        sum_xy=sum_xy+X(i)*Y(i);
        sum_x_sq=sum_x_sq+X(i)*X(i);
        sum_y_sq=sum_y_sq+Y(i)*Y(i);
    end

    //determine the average x and y values for the
    //y-intercept calculation
    x_bar=sum_x/n;
    y_bar=sum_y/n;

    //calculate the slope, y-intercept and r_sq and return the results
    slope=(n*sum_xy-sum_x*sum_y)/(n*sum_x_sq-sum_x^2);
    y_int=y_bar-slope*x_bar;
    r_sq=((n*sum_xy-sum_x*sum_y)/(sqrt(n*sum_x_sq-sum_x^2)*sqrt(n*sum_y_sq-sum_y^2)))^2;

    //determine the appropriate axes size for plotting the data and
    //linear regression line
    axes_size=[min(X)-0.1*(max(X)-min(X)),min(Y)-0.1*(max(Y)-min(Y)),max(X)+0.1*(max(X)-min(X)),max(Y)+0.1*(max(Y)-min(Y))];

    //plot the provided data
    plot2d(X,Y,style=-4,rect=axes_size);

    //plot the calculated regression line
    plot2d(X,(slope*X+y_int));
endfunction
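
As a quick usage example with made-up data:

//example call; the data are hypothetical
x = [1; 2; 3; 4; 5];
y = [2.1; 3.9; 6.2; 8.1; 9.8];
[m, b, r2] = Linear_Regression(x, y);
disp([m b r2]);   //slope near 2, intercept near 0, r2 near 1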

I hope this proves helpful. Let me know in the comments if you have any questions.


Delay Differential Equations and the Lambert W Function

Delay differential equations are natural to study since most systems involve a delay from the input to the output (accelerators, computers, etc.). On a more nerdy level, they involve some pretty interesting math, so let’s take a look.

Consider the scalar, linear, pure delay differential equation:

\dot{x}\left(t\right) = a_{d}x\left(t-\tau\right)

This type of equation is associated with a system in which the rate of change of the state depends on the state delayed by a small time constant \tau and multiplied by a system constant a_d. In studying the stability of differential equations, we want to know whether they will behave in one of two ways: either the solution to the differential equation grows without bound as time approaches infinity (unstable), or the solution remains bounded and approaches some constant as time approaches infinity (stable). To determine this, let’s take the Laplace transform of the DDE.

sX(s) - x(0) = a_{d}e^{-s\tau}X(s)

Collecting the terms and assuming the initial condition to be zero, we arrive at the following equation.

X(s)\left(s - a_{d}e^{-s\tau}\right) = 0

To avoid the trivial solution X(s) = 0, the part of the equation in the parentheses must equal zero. Therefore,

s - a_{d}e^{-s\tau} = 0

Let’s introduce the Lambert W function. This function satisfies the equation:

Ye^Y = X

The solution to this equation is:

Y_k = W_{k}\left(X\right)

In other words,

W_{k}\left(X\right) e^{W_{k}\left(X\right)} = X

where W_{k}\left(X\right) is the Lambert W function of X and k is the branch number. The equation Ye^{Y} = X is transcendental, and the Lambert W function has infinitely many branches (similar to \mbox{tan}^{-1} or \mbox{sin}^{-1}), meaning there are infinitely many values of Y that satisfy the equation above. Additionally, it can be shown that, among all the branches, the principal branch (i.e. k=0) yields the value with the largest real part.

Returning to the Laplace transform equation,

s - a_{d}e^{-s\tau}=0

The roots of this equation determine the stability of the DDE. Therefore, we solve for the values that make this equation zero.

s = a_{d}e^{-s\tau}

Multiplying both sides by \tau e^{s\tau} yields:

\underbrace{s\tau}_{Y}\,\underbrace{e^{s\tau}}_{e^{Y}} = \underbrace{a_{d}\tau}_{X}

Clearly, this equation is a candidate for using the Lambert W function. So, for this simple equation, it can be seen that the roots for the DDE are given as:

s_{k} = \dfrac{1}{\tau}W_{k}\left(a_{d}\tau\right)

There are infinitely many roots that satisfy this equation; however, since the root with the largest real part comes from the principal branch, the principal branch is the only branch that needs to be evaluated. In other words, if the real part of the rightmost root is negative, then the real parts of all the other roots are guaranteed to be negative as well. Therefore, for stability

s_{0} = \dfrac{1}{\tau}W_{0}\left(a_{d}\tau\right) < 0
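
To see this numerically, the following Scilab sketch seeds a Newton iteration near several branches of the Lambert W function (using the asymptotic guess \log x + 2\pi ik from Corless et al.) and compares the real parts of the resulting roots; a_{d} and \tau are made-up values.

//roots s_k = W_k(a_d*tau)/tau for several branches; the principal
//branch (k = 0) should give the largest real part
a_d = 2; tau = 0.5;                      //hypothetical constants
x = a_d * tau;

for k = -2:2
    if k == 0 then
        w = log(1 + x);                  //guess for the principal branch
    else
        L = log(x) + 2*%pi*%i*k;
        w = L - log(L);                  //asymptotic guess for branch k
    end
    for iter = 1:100
        w = w - (w*exp(w) - x) / (exp(w)*(1 + w)); //Newton on w*e^w = x
    end
    mprintf("k = %2d: Re(s) = %8.4f\n", k, real(w/tau));
end
//here Re(s) is largest (and positive, i.e. unstable) for k = 0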

This is an important result for understanding delay systems. However, the study of delays in systems of differential equations is much more difficult and remains an open problem in the field of dynamics and control systems. For example, consider the DDE system

\dot{\bf x}\left(t\right) = {\bf A}_{d}{\bf x}\left(t-\tau\right)

where {\bf x}\left(\cdot\right) is a vector and {\bf A}_{d} is a matrix. Stay tuned for more!

Mesh Circuit Analysis

Recently I had a reader comment with a request for an example of Kirchhoff’s Voltage Law (KVL) used with Ohm’s Law. KVL and Ohm’s Law are both used in the circuit analysis methodology called mesh circuit analysis. Consequently, I thought I would analyze a small circuit and demonstrate the general principles of this technique, which can be used on any circuit.

The circuit we will use below has a 10 volt source and 4 resistors configured as seen.

 

STEP 1:

 

The first step is to label the circuit as I have done in red ink. This includes the current flowing clockwise through each loop (or mesh) and the voltage drops across each passive element. (In this case all the passive elements are resistors.) Next we will use the picture to write some equations.

 

STEP 2:

If you look at the first loop in the diagram, start at the top, travel clockwise around the loop, and add up the voltage drops, you get the first equation below. Notice the term for the source is negative because it is not a drop, it is an increase. Also notice the sum must equal zero.

-10 + v_{1} + v_{2} = 0

The next 2 lines simply write the unknown voltage drops in terms of Ohm’s Law: the voltage drop across a resistor is equal to the current through the resistor times the resistance. Notice for resistor 2 the currents oppose one another. In other words, current 1 runs from the top of the circuit toward the bottom of the circuit through resistor 2, while current 2 runs from the bottom to the top. That is why the current component in the second Ohm’s Law equation is i_{1} minus i_{2}.

v_{1} = i_{1}R_{1}

v_{2} = \left(i_{1} - i_{2}\right)R_{2}

Next, these two Ohm’s Law equations are substituted into the original equation, which gives the 4th line below.

-10 + i_{1}R_{1} + \left(i_{1} - i_{2}\right)R_{2} = 0

From the 4th line the rest is just algebra. Then do the exact same process for loop 2, which gives

\left(i_{2} - i_{1}\right)R_{2} + i_{2}R_{3} + i_{2}R_{4} = 0

This gives 2 equations with 2 unknowns. Of course these can be solved by any linear algebra method you prefer.

STEP 3:

Below I solved it using simple substitution. This solves for the unknown currents.

 

STEP 4:

Finally, substitute the currents you found back into the Ohm’s Law equations we developed earlier to find the unknown voltage drops. The last diagram shows the final solution.
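
Since the worked numbers live in the original diagrams, here is a hedged Scilab sketch of steps 2 through 4 with made-up resistor values R1 through R4:

//mesh equations in matrix form: solve for the loop currents, then
//back-substitute into Ohm's Law for the voltage drops
Vs = 10;                                //source voltage from the post
R1 = 100; R2 = 220; R3 = 150; R4 = 330; //hypothetical resistances, ohms

A = [R1+R2, -R2; -R2, R2+R3+R4];        //coefficients of i1 and i2
b = [Vs; 0];
i = A \ b;                              //loop currents i1 and i2, amps

v1 = i(1)*R1;                           //voltage drops via Ohm's Law
v2 = (i(1)-i(2))*R2;
v3 = i(2)*R3;
v4 = i(2)*R4;
disp(i');                               //loop currents
disp([v1 v2 v3 v4]);                    //note v1 + v2 = 10, as loop 1 requires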

 

I hope this answers my reader’s question and has been helpful. Please let me know if anyone needs more help.