As a side note, I feel that many engineering students are never formally introduced to Engineering Numerical Methods. In many cases, not having an adequate background in numerical methods leads to trouble when troubleshooting solutions, or to a weak grasp of the basic mathematics behind solving engineering problems. I hope that this Numerical Methods for Engineering page gives you the skills you need to solve your engineering problems efficiently and accurately.
This page covers what I believe to be the most effective and most common methods of solving these problems (I like avoiding BS).
Here is what I'll cover:
- Matrix Methods (solving systems of equations)
- Simultaneous Linear Equations
- Naive Gauss Elimination
- LU decomposition
- Solutions to non-linear systems of equations
- Newton's Method
- Eigenvalue/Eigenvector Problems
- Classical Method
- Numerical Integration
- Rectangle Method
- Trapezoid Method
- Simpson's 1/3 Method
- Numerical Differentiation
Matrix Methods
What's the point of using matrices? Well, nearly every engineering problem is represented as an equation or a system of equations. When we have a system of equations, we have a set of variables that need to be solved for. The solved variables represent the solution to our problem. We may have something like this:
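For instance, a generic system of three linear equations in three unknowns (the coefficients and right-hand sides here are placeholders):

$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= b_3
\end{aligned}
$$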
Each equation can be written in the matrix form as follows:
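In [A]{x} = {b} notation:

$$
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix}
=
\begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix}
$$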
A represents the matrix of coefficients in equations 1, 2, and 3, and {x} is the column vector of unknowns. Let's move forward to solving this system of linear equations.
Simultaneous Linear Equations
Naive Gauss Elimination
The idea behind solving a system of equations is to eliminate unknowns. The goal is to reduce the [A] matrix to an upper (or lower) triangular matrix so that the unknowns can be recovered one at a time by substitution. The following picture demonstrates what we are trying to accomplish.
Here we can see that entries in the [A] matrix are eliminated at each iteration (step) of 'upper triangularizing' the [A] matrix. The final matrix is the upper triangular matrix we want. So, how is this accomplished using Naive Gauss Elimination?
Well, this method is called naive because it does not precondition the matrix by pivoting rows or columns; it also doesn't allow for 'selectively' eliminating individual entries of [A] to make our life easier. The first step is to eliminate all x1 entries below the first row (as shown in the above figure). This is accomplished by forward elimination, as follows.
For equation 2 (the second row), divide equation 1 (row one) by a11 and multiply it by a21.
Now, subtract this from row 2 to eliminate its first entry. For the 3x3 case, the only remaining entries in row 2 will be the modified a22 and a23. This process must be repeated for each respective row until the matrix is fully reduced to upper-triangular form. Note that any entry used as a denominator must not be zero. In other words, we should avoid reducing a matrix with a determinant equal to or close to zero.
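Written compactly, the combined operation on row 2 is:

$$\text{Row}_2 \leftarrow \text{Row}_2 - \frac{a_{21}}{a_{11}}\,\text{Row}_1$$

and the same pattern, with the appropriate pivot row and multiplier, clears each column below the diagonal.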
Let's look at an example (USF) of the entire process.
Back substitution is used in this example to solve for the unknowns in the reduced matrix. The example demonstrates that this method can be performed by a machine, as it requires no external judgment about row/column switching. The method is a bit cumbersome, though, and falls apart when a pivot entry is zero.
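To make the procedure concrete, here is a minimal Python sketch of naive Gauss elimination with back substitution; the 3x3 system at the bottom is my own made-up example:

```python
# A minimal sketch of naive Gauss elimination with back substitution.
# Plain Python lists keep the mechanics visible; there is no pivoting,
# so a zero on the diagonal will break it (as noted above).

def naive_gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]  # work on copies; leave caller's data alone
    b = b[:]

    # Forward elimination: zero out entries below the diagonal, column by column.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]  # fails if A[k][k] == 0 (no pivoting)
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]

    # Back substitution: solve the upper-triangular system from the bottom up.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Example: an illustrative 3x3 system with solution (1, -2, 4).
A = [[4.0, -2.0, 1.0],
     [3.0,  6.0, -4.0],
     [2.0,  1.0,  8.0]]
b = [12.0, -25.0, 32.0]
print(naive_gauss_solve(A, b))
```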
LU Decomposition
LU Decomposition is my favorite method for solving systems of simultaneous linear equations, and it is the only reason I am teaching two methods for solving them. LU decomposition involves factoring a matrix into lower and upper triangular matrices and then using forward and back substitution to find the solution to the system.
The main idea is that [L] and [U] can be computed without solving the simultaneous equations. Once the LU decomposition is found, the [A]{x}={b} problem may be solved in two steps. Note that [A]{x}=[L][U]{x}={b}. Therefore, we can write [L]{y}={b} and [U]{x}={y}, where {y} is the intermediate solution: solve [L]{y}={b} first, then plug {y} into [U]{x}={y}.
Here is the pattern:
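Assuming the Doolittle convention, where [L] carries ones on its diagonal (other conventions put them on [U] instead), the pattern works out to:

$$
u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik}\,u_{kj} \quad (j \ge i), \qquad
l_{ij} = \frac{1}{u_{jj}}\left(a_{ij} - \sum_{k=1}^{j-1} l_{ik}\,u_{kj}\right) \quad (i > j), \qquad l_{ii} = 1
$$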
Each aij value is processed by working through [A] row by row. This method is, in effect, splitting one matrix into two, which is called 'triangularization.' The advantage of this method is that for larger or sparser matrices, and especially when the same [A] must be solved against many right-hand sides, LU decomposition becomes more computationally efficient: fewer overall steps are used and floating-point round-off errors are reduced. The above example shows the general method of LU decomposition and how it extends to larger matrices.
In fact, LU decomposition may be used within larger computational processes by decomposing smaller, localized systems and assembling the decomposed matrices into a larger matrix. This lets the programmer save computational steps and recall them later, when the global matrix (the assembly of the small matrices) is solved.
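As a concrete sketch, here is a Doolittle-style LU decomposition with the two substitution passes in Python; the unit-diagonal convention on [L] and the test system are my own illustrative choices:

```python
# A sketch of Doolittle LU decomposition (ones on the diagonal of L),
# followed by forward and back substitution. No pivoting is performed,
# so it assumes the matrix can be factored as-is.

def lu_decompose(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):  # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Forward substitution: [L]{y} = {b}.
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    # Back substitution: [U]{x} = {y}.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

# Same illustrative system as before; expect (1, -2, 4).
A = [[4.0, -2.0, 1.0], [3.0, 6.0, -4.0], [2.0, 1.0, 8.0]]
L, U = lu_decompose(A)
print(lu_solve(L, U, [12.0, -25.0, 32.0]))
```

Note that once [L] and [U] are stored, each new right-hand side {b} costs only the two substitution passes.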
Solutions to non-linear systems of equations
Newton's Method
In many instances, equations may be nonlinear or in polynomial form. If a solution is to be found, one must find the roots. If an equation is second order, then the quadratic formula can simply be used to find the root(s). However, if the equation is higher than second order, or there are multiple simultaneous equations, we should employ a numerical method to avoid the headache of finding the root(s) by hand.
The most common root-finding method is Newton's method (also called the Newton-Raphson method). Recall from calculus that the first derivative of a function is the slope of its tangent line (1D case) or tangent plane (2D case). To find a root, we use the function value and the first derivative at a point: we follow the tangent at the current guess down to where it crosses the axis, and that crossing becomes the next guess. When the function value at the current guess falls within the convergence bounds (the resolution with which you want the root(s) to lie), a root has been reached. In other words, the root is at an axis intercept.
The first 'guess' is denoted x0 for the above 1D case, and the process iterates within the user's desired bounds. The update equation then takes the following form for the 1D, single-variable case.
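That is, each new guess is the axis intercept of the tangent at the old one:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

The iteration stops once |x_{n+1} - x_n| falls within the desired tolerance.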
When we become interested in systems of nonlinear equations, we must employ the Jacobian. The Jacobian is an organized matrix of first-order partial derivatives, and it is extremely useful in numerical methods and in Continuum Mechanics. Geometrically, the Jacobian describes the local rate of change, stretching, and orientation of a mapping; if we intersected a 3D shape with a 2D plane, the Jacobian at a point on the shape would describe the spatial orientation of the tangent plane there. In matrix form, the Jacobian is:
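For a system of functions f1, ..., fn in the variables x1, ..., xn:

$$
[J] = \begin{bmatrix}
\dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_n}
\end{bmatrix}
$$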
Now, if we want to solve simultaneous nonlinear equations, we can employ the Jacobian just as we used the first derivative in the 1D root-finding case.
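The update becomes (the inverse is written out for clarity; in practice one solves the equivalent linear system rather than inverting [J]):

$$\{x\}_{n+1} = \{x\}_n - [J(\{x\}_n)]^{-1}\,\{f(\{x\}_n)\}$$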
Here, xn+1 is the unknown, and the Jacobian is a function of the current value of the spatial coordinate xn. We can have a computer repeatedly feed current values into the iterative equation until the solution tolerance is reached. Here are the steps of Newton's method (a sketch in code follows the list).
- Given a function or system of simultaneous functions
- Compute the first derivative of the function, or the Jacobian of the system
- Employ Newton's formula
- Insert the initial guess for the variable(s)
- Find the next value (xn+1)
- Let xn = xn+1 for the subsequent iteration
- Repeat until convergence is reached within tolerance
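Here is a minimal Python sketch of both cases, assuming numpy is available; the example functions, initial guesses, and tolerances are my own illustrative choices:

```python
# A sketch of Newton's method: first for a single equation, then for a
# 2x2 system using the Jacobian.
import numpy as np

def newton_1d(f, dfdx, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step                       # x_{n+1} = x_n - f(x_n)/f'(x_n)
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Example: root of f(x) = x^3 - 2x - 5 near x = 2 (about 2.0946).
print(newton_1d(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0))

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))  # solve J(x_n) dx = F(x_n)
        x -= step                           # instead of inverting J
        if np.linalg.norm(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Example system: x^2 + y^2 = 4 and x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(newton_system(F, J, [2.0, 0.5]))
```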
Eigenvalue/Eigenvector Problems
Firstly, I need to explain what Eigenvalues and Eigenvectors are. Most engineers have no idea what they are or their significance in computational processes. Eigenvectors are vectors that, when multiplied by a square matrix, remain parallel to their original direction; the corresponding Eigenvalue is the scalar that represents the principal value of the matrix (for example, principal stresses, stretches, and vibrational displacements). In other words, an eigenvector is a direction for the principal value.
The characteristic polynomial is the classical method of determining Eigenvalues. The "Eigenvalue Problem" takes the form:
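In standard notation:

$$\det\big([A] - \lambda[I]\big) = 0$$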
Here A is a square matrix, I is the identity matrix, and lambda is an eigenvalue. Expanding the determinant gives the following form:
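Schematically, with coefficients c built from the entries of [A] (sign conventions vary):

$$\lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0 = 0$$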
This equation is in polynomial form and has n roots. The polynomial is called the "Characteristic Polynomial" of the square matrix, and its order is the highest power appearing in it; that highest power is n. Solving for the roots of the polynomial can be done using Newton's method, as illustrated on this page. The roots of the polynomial are the eigenvalues of the square matrix.
Eigenvectors are the directions of the "stretched" eigenvalues. In other words, an eigenvector is the direction (the corresponding column) of the diagonalized matrix, and it points in a direction that the original square matrix leaves unrotated. An eigenvalue and its corresponding eigenvector form an eigenpair. The following is an example demonstrating the determination of eigenpairs.
Here the lambdas represent eigenvalues, and x, y, and z are spatial coordinates in the eigenvectors v. If non-unique eigenvectors arise, creating a dummy vector is acceptable. For instance, if you are trying to find the second eigenvector and it is not unique, you may use the vector <0, 1, 0> to solve for the corresponding eigenvector.
There are other methods of finding eigenvectors for large matrices (the Householder/QR method, NASA method, etc.); however, for simplicity, the approach above will always work. Note that the "Classical Method" is not particularly computationally efficient.
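Here is a minimal Python sketch of the classical method, assuming numpy is available; the matrix is an arbitrary symmetric example of my own, and the null-space step recovers each eigenvector:

```python
# Classical method: build the characteristic polynomial, take its roots
# as the eigenvalues, then recover each eigenvector from the null space
# of (A - lambda*I).
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])   # eigenvalues work out to 2, 11, and 1

coeffs = np.poly(A)              # characteristic polynomial coefficients
eigenvalues = np.roots(coeffs)   # its roots are the eigenvalues

for lam in eigenvalues:
    lam = lam.real  # this symmetric example has real eigenvalues
    # (A - lam*I) is singular, so the eigenvector spans its null space;
    # the right-singular vector for the (near-)zero singular value gives it.
    _, _, Vt = np.linalg.svd(A - lam * np.eye(3))
    v = Vt[-1]
    print(f"lambda = {lam:.4f}, v = {np.round(v, 4)}")
```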
Numerical Integration
Rectangle Method
Integration by means of separating a space into 2D or 3D rectangles is extremely straightforward. A picture is worth a thousand words:
The overall error of the integration estimate is a function of the step size taken: smaller step sizes result in less error.
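With n rectangles of width Δx (left-endpoint heights are one common choice):

$$\int_a^b f(x)\,dx \approx \sum_{i=0}^{n-1} f(x_i)\,\Delta x, \qquad \Delta x = \frac{b-a}{n}$$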
Trapezoidal Integration
In an attempt to reduce the error of the integration estimate, trapezoidal integration may be used. It is similar to rectangular integration:
The error is slightly reduced and the integration formula is as follows:
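With n intervals of width h = (b - a)/n, the composite trapezoid rule reads:

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n)\right]$$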
Simpson's 1/3 Method
Simpson's rule is based on a polynomial area approximation of some whole-number order larger than one. The order of the polynomial is the highest exponent observed in the polynomial function. Over each set of neighboring points, a polynomial of that order is assumed, and an algebraic expression for the area is formed on the basis of the polynomial order. The picture below illustrates the approximation:
For an order two Simpson approximation, the following formula may be used:
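With an even number of intervals n and h = (b - a)/n, the composite 1/3 rule is:

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + 4\sum_{i=1,3,5,\ldots}^{n-1} f(x_i) + 2\sum_{i=2,4,6,\ldots}^{n-2} f(x_i) + f(x_n)\right]$$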
Remember, it is the summation of these piecewise integrals that evaluates your function between the desired bounds.
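For comparison, here is a small Python sketch of all three rules on the same integral; the integrand sin(x) on [0, pi] (exact value 2) is an illustrative choice:

```python
# Compare rectangle, trapezoid, and Simpson's 1/3 rules on one integral.
import math

def rectangle(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))  # left endpoints

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    if n % 2:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd interior points
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior points
    return s * h / 3

# Integral of sin(x) from 0 to pi is exactly 2.
for rule in (rectangle, trapezoid, simpson):
    print(rule.__name__, rule(math.sin, 0.0, math.pi, 10))
```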
Numerical Differentiation
Derivatives may be viewed as the slope between neighboring points of a function (1D case). Therefore, if we want to approximate the slope, we simply need to know the step size and the change in function values at those points. The following depicts an approximate derivative:
Above we see an approximation of the derivative of a function between the variable values x(k-1) and x(k+1); this approximation is taken as the derivative at xk. Therefore, the basic formula for this two-point approximate derivative in 1D is:
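With uniform spacing h between neighboring points:

$$f'(x_k) \approx \frac{f(x_{k+1}) - f(x_{k-1})}{2h}$$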
For a three point approximation in 1D, the derivative approximation is:
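Several three-point variants exist; a common one-sided (forward) version, shown here as a representative choice, is:

$$f'(x_k) \approx \frac{-3f(x_k) + 4f(x_{k+1}) - f(x_{k+2})}{2h}$$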
This method applies to partial derivatives as well: vary one variable in the function values while holding the others fixed, and evaluate accordingly. The numerical derivative is then computed using the above equation(s). Again, the accuracy of these methods depends on the step size.
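A short Python sketch, with illustrative functions and step sizes of my own choosing, ties the 1D and partial-derivative cases together:

```python
# Central-difference differentiation, including a partial derivative where
# the other variable is held fixed. Step size h is a judgment call: too
# large loses accuracy, too small amplifies round-off.
import math

def central_diff(f, x, h=1e-5):
    # f'(x) ~ (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

print(central_diff(math.sin, 1.0))   # ~cos(1) = 0.5403

def partial_x(f, x, y, h=1e-5):
    # df/dx at (x, y), holding y fixed
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

f = lambda x, y: x**2 * y + y**3
print(partial_x(f, 2.0, 3.0))        # exact: 2xy = 12
```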
Conclusion
Numerical Methods in Engineering is an extensive topic. Almost everything done in the field relies on basic principles of applied mathematics. Vibrational and impact problems in structural mechanics often require Fourier Series Analysis, heat transfer may require advanced techniques for solving elliptic problems... the list goes on.
This page covered some basic topics in calculus and linear algebra. However, when it is necessary to solve complicated problems (ODEs and PDEs) in complicated geometries, numerical techniques such as the Finite Element Method must be employed. Numerical techniques are extensively researched and funded. I would like to extend this page to include different series methods of analysis, Gauss Quadrature, and advanced methods of solving eigenproblems. Additionally, a page on the Finite Element Method would be fun (of course, I would need a page on Continuum Mechanics to back it up!).