Topics in Computational Mechanics: Part 4

Finite element analysis of an aircraft’s bearing bracket (source: SimScale)

The geometries that we have considered so far are simple. To analyze a complex, real-world structure, we need a way to model its geometry accurately. Not to worry: we can map a simple element to a complex one using a function.

Today, we will study the isoparametric approach. It tells us how to define the function so that we can evaluate integrals quickly and achieve the optimal rate of convergence. It is key to making FEM practical and powerful.

1. Isoparametric Approach

We begin with an element in the parametric space. The parametric space is so named because the element gets to take a simple shape—one that we can easily define with parameters. For example, we use a line segment in 1D, a triangle or square in 2D, and a tetrahedron or cube in 3D. The element also provides nodes—special points that let us define a basis of functions. Because we will use this element to create other elements, we refer to it as the parent domain.

Typical elements in 1D, 2D, and 3D. (source)

Once we have such an element in the parametric space, we can create a function that maps the element into the physical space. The physical space is where the structure of our interest lives. Because we mapped (transformed) the element, the element can take a complex shape in the physical space.

Thanks to the idea of parametric and physical spaces, we can build complicated geometries easily. Both rectangular and circular bars above came from the same “stack of boxes” in the parametric space.

For our study, we assume that the function is bijective, i.e. the inverse function exists. The inverse allows us to map the complex element (in the physical space) back to the simple element (in the parametric space). This ability becomes crucial for evaluating integrals. Note that, for certain problems, the inverse may not exist everywhere, or may collapse into a singularity. Dealing with such numerical instability is a topic for another day.

There is more than one way to map the parametric space to the physical space. This is where the isoparametric approach comes in. It tells us to use the same basis functions to achieve two goals: (1) map the parent domain to an element in the physical space, i.e. describe the geometry; and (2) define the FE solution inside the element, i.e. describe the field that we are interested in.

Furthermore, an isoparametric approach ensures convergence. As we use more and more elements to describe the geometry (i.e. decrease the element size), we expect the FE solution to converge to the exact solution, point-wise (strong sense) or in some weak sense. The sufficient conditions for convergence are to use the right basis—one that meets the level of smoothness required by the weak form, and forms a partition of unity—and to follow the isoparametric approach.

Since 2005, the isogeometric approach (IGA) has been pursued by both academic and engineering communities. Here, B-splines, NURBS, and T-splines, which are readily available in CAD (computer-aided design), serve as the basis for the geometry and the fields. To highlight the fact that the geometry is represented exactly with B-splines, NURBS, and T-splines—versus approximately with polynomials—we call this isoparametric approach isogeometric. IGA gives us competitive, if not better, results over the usual isoparametric approaches.

Next, we take a look at how to apply the isoparametric approach to solve 1D elastostatics problems. We consider two types of 1D elements for a compare-and-contrast.

a. Two-noded, linear elements (1D)

Consider two linear functions N_{1} = N_{1}(\xi) and N_{2} = N_{2}(\xi), which are defined over the parent domain, the closed interval [-1,\,1].

Equation 1.1. Basis functions for the parent domain

\begin{array}{l} \displaystyle N_{1}(\xi) = -\frac{1}{2}\xi + \frac{1}{2} \\[16pt] \displaystyle N_{2}(\xi) = \frac{1}{2}\xi + \frac{1}{2} \end{array}

These functions are interpolatory. That is, they satisfy the equation N_{i}(\xi_{j}) = \delta_{ij} at the nodes \xi_{1} = -1 and \xi_{2} = 1. We can also check that N_{1} and N_{2} form a basis for the set of linear polynomials over the parent domain, i.e. any linear function over [-1,\,1] can be written as a linear combination of N_{1} and N_{2}.

Linear basis functions for the parent domain.
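As a quick numerical sanity check, here is a minimal sketch (Python with NumPy) that verifies the interpolatory property and the spanning claim for equation 1.1. The coefficients (b - a) and (a + b) come from evaluating a\xi + b at the two nodes.

```python
import numpy as np

# Linear basis functions over the parent domain [-1, 1] (equation 1.1).
def N1(xi):
    return -0.5 * xi + 0.5

def N2(xi):
    return 0.5 * xi + 0.5

# Interpolatory property: N_i(xi_j) = delta_ij at xi_1 = -1, xi_2 = 1.
nodes = np.array([-1.0, 1.0])
vals = np.array([[N1(x), N2(x)] for x in nodes])
assert np.allclose(vals, np.eye(2))

# Any linear function a*xi + b is a combination of N1 and N2:
# a*xi + b = (b - a)*N1(xi) + (a + b)*N2(xi).
xi = np.linspace(-1.0, 1.0, 11)
a, b = 3.0, -2.0
assert np.allclose(a * xi + b, (b - a) * N1(xi) + (a + b) * N2(xi))
```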

We need to create a linear function x = x(\xi) that can map the parent domain [-1,\,1] to any two-noded element e = [x_{1},\,x_{2}] in the physical space. Because the basis functions N_{1} and N_{2} are interpolatory, we deduce that the map is,

\begin{aligned} x(\xi) & \displaystyle = x_{1}N_{1}(\xi) + x_{2}N_{2}(\xi) \\[16pt] & \displaystyle = x_{1}\Bigl(-\frac{1}{2}\xi + \frac{1}{2}\Bigr) + x_{2}\Bigl(\frac{1}{2}\xi + \frac{1}{2}\Bigr) \end{aligned}

Note that the derivative of the map is a constant function,

\displaystyle\frac{dx}{d\xi}(\xi) = \frac{x_{2} - x_{1}}{2}

and, more importantly, that the derivative is positive for all \xi \in (-1,\,1).

Inverse Function Theorem (1D)

Let x = x(\xi) be a map from [a,\,b] to [c,\,d]. If the map is C^{k}(a,\,b) for some k \geq 1 and its Jacobian J(\xi) \equiv (dx/d\xi)(\xi) is positive for all \xi \in (a,\,b), then the inverse map \xi = \xi(x) exists and is C^{k}(c,\,d).

We can generalize this result to higher dimensions. The Jacobian is defined as the determinant of the Jacobian matrix.

Since the derivative is positive, the Inverse Function Theorem tells us that the inverse map \xi = \xi(x) exists, i.e. we can always return to the parent domain from any element in the physical space. We need this ability to calculate integrals fast and accurately. In general, we can’t write an explicit formula for the inverse map. Luckily, its existence is enough for us.
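For this linear map the inverse happens to be explicit, which makes the round trip easy to demonstrate. A sketch using the example element [1, 3] from the figure below; in general we rely only on the existence of the inverse, not on a formula.

```python
import numpy as np

# Map the parent domain [-1, 1] to the physical element [x1, x2].
x1, x2 = 1.0, 3.0   # example node locations

def x_of_xi(xi):
    return x1 * (-0.5 * xi + 0.5) + x2 * (0.5 * xi + 0.5)

# For a linear map the inverse is explicit: xi = (2x - x1 - x2)/(x2 - x1).
def xi_of_x(x):
    return (2.0 * x - x1 - x2) / (x2 - x1)

J = (x2 - x1) / 2.0          # constant, positive Jacobian
assert J > 0.0

xi = np.linspace(-1.0, 1.0, 5)
assert np.allclose(xi_of_x(x_of_xi(xi)), xi)   # round trip
```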

To follow the isoparametric approach, we define the basis functions N_{1}(x) and N_{2}(x) for the physical element to be the composite functions N_{1}(\xi(x)) and N_{2}(\xi(x)).

Equation 1.2. Basis functions for the element

\begin{array}{l} N_{1}(x) \equiv N_{1}(\xi(x)) \\[16pt] N_{2}(x) \equiv N_{2}(\xi(x)) \end{array}

Notice the abuse of notation. The N_{1} and N_{2} on the left-hand side refer to the basis for the element in the physical space, whereas those on the right refer to the linear basis for the parent domain. I didn’t use a different symbol for the physical basis because, from the context, it will always be clear which basis we are looking at.

Basis functions for an element in the physical space. This element has nodes at x_{1} = 1 and x_{2} = 3.

Lastly, we express the field of our interest in terms of the physical basis N_{1}(x) and N_{2}(x). In elastostatics, we want to know the displacement field u = u(x). Over the element e = [x_{1},\,x_{2}], we define the FE solution u_{h} to be,

Equation 1.3. Displacement field

u_{h}(x) = u_{1}N_{1}(x) + u_{2}N_{2}(x)

for some coefficients u_{1} and u_{2}.

Please note that, by N_{1}(x) and N_{2}(x) in equation 1.3, we do not mean “construct linear, interpolatory basis functions for the physical element, just like we did for the parent domain.” What we really mean is, “find the value of \xi in the parent domain that will map to x, then evaluate N_{1}(\xi) and N_{2}(\xi).”

This distinction is hard to see for a linear basis, since the composite functions N_{1}(\xi(x)) and N_{2}(\xi(x)) happen to be linear polynomials in x. We will see with a quadratic basis that, in general, the basis functions for the physical element are not polynomials in x.

The element stiffness matrix K^{e} and the element load vector \vec{f}^{e} have integrals that are defined in the physical space. To evaluate them, we “pull back” to the parent domain and evaluate the integrals there. We can do this because the inverse map exists.

Recall that, for a truss-like bar under an axial load f = f(x), the entries of K^{e} and \vec{f}^{e} are given by,

\begin{array}{l} \displaystyle (K^{e})_{ij} = \int\limits_{x_{1}}^{x_{2}}\,E(x)A(x)\,\frac{dN_{i}}{dx}(x)\frac{dN_{j}}{dx}(x)\,\,dx \\[24pt] \displaystyle (\vec{f}^{e})_{i} = \int\limits_{x_{1}}^{x_{2}}\,f(x)\,N_{i}(x)\,\,dx \end{array}

In terms of the parametric space,

\begin{array}{l} \displaystyle\frac{dN_{i}}{dx} \equiv \frac{dN_{i}}{d\xi}\,\frac{d\xi}{dx} = \frac{1}{J(\xi)}\,\frac{dN_{i}}{d\xi} \\[24pt] \displaystyle dx \equiv \frac{dx}{d\xi}\,d\xi = J(\xi)\,d\xi \end{array}

Hence, a pullback to the parent domain results in the following integrals:

Equation 1.4. Element matrix and vector

\begin{array}{l} \displaystyle (K^{e})_{ij} = \int\limits_{-1}^{1}\,E(x(\xi))A(x(\xi))\,\frac{dN_{i}}{d\xi}(\xi)\frac{dN_{j}}{d\xi}(\xi) \cdot \frac{1}{J(\xi)}\,\,d\xi \\[24pt] \displaystyle (\vec{f}^{e})_{i} = \int\limits_{-1}^{1}\,f(x(\xi))\,N_{i}(\xi) \cdot J(\xi)\,\,d\xi \end{array}

where,

\begin{array}{l} \displaystyle N_{1}(\xi) = -\frac{1}{2}\xi + \frac{1}{2} \\[16pt] \displaystyle N_{2}(\xi) = \frac{1}{2}\xi + \frac{1}{2} \\[16pt] \displaystyle N_{1}'(\xi) = -\frac{1}{2} \\[16pt] \displaystyle N_{2}'(\xi) = \frac{1}{2} \\[20pt] x(\xi) = x_{1}N_{1}(\xi) + x_{2}N_{2}(\xi) \\[16pt] \displaystyle J(\xi) = \frac{x_{2} - x_{1}}{2} \end{array}

Note that the quantities x_{1}, x_{2}, E(x), A(x), and f(x) are data that we need to provide in an input file. A mesh generator may also have the final say on where the nodes x_{1} and x_{2} should be placed.
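The pipeline above can be sketched end to end. This minimal example evaluates equation 1.4 with a 2-point Gauss rule (introduced in Section 2) and checks the result against the closed-form stiffness EA/h for constant data; the values of E, A, and f are illustrative assumptions.

```python
import numpy as np

# Element matrix and load vector of equation 1.4 for a two-noded element,
# with constant (example) data so we can check against the closed form.
x1, x2 = 1.0, 3.0
E = lambda x: 210e9     # Pa, example
A = lambda x: 1e-4      # m^2, example
f = lambda x: 5.0       # N/m, example

N      = [lambda xi: -0.5 * xi + 0.5, lambda xi: 0.5 * xi + 0.5]
dN_dxi = [lambda xi: -0.5,            lambda xi: 0.5]

def x_of_xi(xi):
    return x1 * N[0](xi) + x2 * N[1](xi)

J = (x2 - x1) / 2.0                        # constant Jacobian
gauss_pts = [-np.sqrt(1/3), np.sqrt(1/3)]  # 2-point Gauss rule
gauss_wts = [1.0, 1.0]

Ke = np.zeros((2, 2))
fe = np.zeros(2)
for xi, w in zip(gauss_pts, gauss_wts):
    x = x_of_xi(xi)
    for i in range(2):
        fe[i] += w * f(x) * N[i](xi) * J
        for j in range(2):
            Ke[i, j] += w * E(x) * A(x) * dN_dxi[i](xi) * dN_dxi[j](xi) / J

# Closed-form answers for constant E, A, f: K = EA/h [[1,-1],[-1,1]], f = f h/2 [1,1].
h = x2 - x1
assert np.allclose(Ke, 210e9 * 1e-4 / h * np.array([[1, -1], [-1, 1]]))
assert np.allclose(fe, 5.0 * h / 2 * np.ones(2))
```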

b. Three-noded, quadratic elements (1D)

Consider three quadratic functions N_{1} = N_{1}(\xi), N_{2} = N_{2}(\xi), and N_{3} = N_{3}(\xi) over the parent domain [-1,\,1]:

Equation 1.5. Basis functions for the parent domain

\begin{array}{l} \displaystyle N_{1}(\xi) = \frac{1}{2}\xi^{2} - \frac{1}{2}\xi \\[20pt] N_{2}(\xi) = -\xi^{2} + 1 \\[16pt] \displaystyle N_{3}(\xi) = \frac{1}{2}\xi^{2} + \frac{1}{2}\xi \end{array}

These functions satisfy the interpolatory property N_{i}(\xi_{j}) = \delta_{ij} for the nodes \xi_{1} = -1, \xi_{2} = 0, and \xi_{3} = 1. They also form a basis for the quadratic polynomials over [-1,\,1].

Quadratic basis functions for the parent domain.

We now use N_{1}, N_{2}, and N_{3} to quadratically map the parent domain to a three-noded element e = [x_{1},\,x_{3}] in the physical space (x_{2} is the node in-between):

x(\xi) = x_{1}N_{1}(\xi) + x_{2}N_{2}(\xi) + x_{3}N_{3}(\xi)

The derivative of the map is a linear function,

\displaystyle \frac{dx}{d\xi}(\xi) = x_{1}\Bigl(\xi - \frac{1}{2}\Bigr) + x_{2}\bigl(-2\xi\bigr) + x_{3}\Bigl(\xi + \frac{1}{2}\Bigr)

In order to use the Inverse Function Theorem, we must guarantee that the derivative (Jacobian) is positive for all \xi \in (-1,\,1).

We claim that, in order for the Jacobian J(\xi) = (dx/d\xi)(\xi) to be positive for all \xi, the middle node x_{2} must be placed within a quarter distance from the element’s center, (x_{1} + x_{3})/2. In other words, if h^{e} \equiv x_{3} - x_{1} denotes the size of the element, we must have,

\displaystyle x_{2} \in \Bigl(\frac{x_{1} + x_{3}}{2} - \frac{h^{e}}{4},\,\,\frac{x_{1} + x_{3}}{2} + \frac{h^{e}}{4}\Bigr)


To prove this, we only need to consider the case of x_{1} = 0 and x_{3} = 1. Stretching the element multiplies the Jacobian by some positive number, and translating it does not change the Jacobian.

Then,

\begin{aligned} J(\xi) & \displaystyle = 0 \cdot \Bigl(\xi - \frac{1}{2}\Bigr) + x_{2}\bigl(-2\xi\bigr) + 1 \cdot \Bigl(\xi + \frac{1}{2}\Bigr) \\[16pt] & \displaystyle = (1 - 2x_{2})\,\xi + \frac{1}{2} \end{aligned}

We already see that, if x_{2} = 0.5 (we place the middle node at the element’s center), then the Jacobian will be positive for all \xi \in (-1,\,1). What if we wanted to put the middle node somewhere else?

Well, consider the case of x_{2} < 0.5, which means the quantity (1 - 2x_{2}) is positive. The fact that -1 < \xi < 1 implies the following inequality:

\displaystyle -(1 - 2x_{2}) + \frac{1}{2} \,<\, J(\xi) \,<\, (1 - 2x_{2}) + \frac{1}{2}

Since the Jacobian is positive for all \xi if and only if the lower bound of J(\xi) is positive, we must have,

x_{2} > 0.25

Finally, consider the case of x_{2} > 0.5, which means the quantity (1 - 2x_{2}) is negative. The bound -1 < \xi < 1 implies,

\displaystyle -(1 - 2x_{2}) + \frac{1}{2} \,>\, J(\xi) \,>\, (1 - 2x_{2}) + \frac{1}{2}

Set the lower bound of J(\xi) to be greater than zero to conclude that,

x_{2} < 0.75
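The claim is easy to probe numerically. A sketch that scans middle-node positions for the unit element and checks the sign of the Jacobian on a fine grid:

```python
import numpy as np

# Quarter-point check for the unit element x1 = 0, x3 = 1:
# J(xi) = (1 - 2*x2)*xi + 1/2 stays positive on (-1, 1)
# exactly when 0.25 < x2 < 0.75.
def jacobian_positive(x2, n=2001):
    xi = np.linspace(-0.999, 0.999, n)   # interior sample of (-1, 1)
    J = (1.0 - 2.0 * x2) * xi + 0.5
    return bool(np.all(J > 0.0))

assert jacobian_positive(0.5)        # middle node at the center
assert jacobian_positive(0.3)
assert jacobian_positive(0.7)
assert not jacobian_positive(0.2)    # too close to x1
assert not jacobian_positive(0.8)    # too close to x3
```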


We define the basis functions N_{1}(x), N_{2}(x), and N_{3}(x) for the physical element to be the composite functions N_{1}(\xi(x)), N_{2}(\xi(x)), and N_{3}(\xi(x)).

Equation 1.6. Basis functions for the element

\begin{array}{l} N_{1}(x) \equiv N_{1}(\xi(x)) \\[16pt] N_{2}(x) \equiv N_{2}(\xi(x)) \\[16pt] N_{3}(x) \equiv N_{3}(\xi(x)) \end{array}

Basis functions for an element in the physical space. This element has nodes at x_{1} = 1, x_{2} = 1.6, and x_{3} = 3.

Lastly, we define the FE solution u_{h} over the element e = [x_{1},\,x_{3}] using these basis functions. In other words,

Equation 1.7. Displacement field

u_{h}(x) = u_{1}N_{1}(x) + u_{2}N_{2}(x) + u_{3}N_{3}(x)

for some coefficients u_{1}, u_{2}, and u_{3}.

Again, by N_{1}(x), N_{2}(x), and N_{3}(x) in equation 1.7, we don’t mean “construct quadratic, interpolatory basis functions for the physical element.” We really mean “find the value of \xi in the parent domain that will map to x, then evaluate N_{1}(\xi), N_{2}(\xi), and N_{3}(\xi).” To make this distinction clear, the next figure shows what would happen if we did construct quadratic, interpolatory basis functions in the physical space.

The quadratic, interpolatory basis functions on the physical space, which are shown in dashed lines, differ from the composite functions, shown in solid lines.

In general, the basis functions are not quadratic polynomials in x, because they are composite functions of a quadratic polynomial in \xi and the inverse map \xi(x), which takes on a complicated form. The only time that they are quadratic in x—the only time we could get away with constructing interpolatory functions directly in the physical space—is when the middle node x_{2} = (x_{1} + x_{3})/2 is placed at the element’s center, just like how the node \xi_{2} = 0 was placed at the center of the parent domain. But this restriction would severely limit the extent to which we can model the geometry.
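To make the procedure concrete, here is a sketch that evaluates the composite basis by numerically inverting the quadratic map (bisection suffices because the Jacobian is positive, so the map is increasing). It uses the example element x_1 = 1, x_2 = 1.6, x_3 = 3 from the figure and confirms that the composite N_2 differs from the quadratic interpolant in x.

```python
import numpy as np

# Off-center middle node (inside the quarter-point window (1.5, 2.5)).
x1, x2, x3 = 1.0, 1.6, 3.0

def N(xi):
    # Quadratic parent-domain basis, equation 1.5.
    return np.array([0.5 * xi**2 - 0.5 * xi, 1.0 - xi**2, 0.5 * xi**2 + 0.5 * xi])

def x_of_xi(xi):
    Nv = N(xi)
    return x1 * Nv[0] + x2 * Nv[1] + x3 * Nv[2]

def xi_of_x(x, tol=1e-12):
    # Bisection works because the map is strictly increasing.
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if x_of_xi(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Composite basis N_i(xi(x)) at the element's physical center x = 2:
xi_c = xi_of_x(2.0)
composite = N(xi_c)

# The quadratic polynomial in x that interpolates (0, 1, 0) at the nodes
# takes a DIFFERENT value there: the composite basis is not a polynomial in x.
quad_interp_N2 = ((2.0 - x1) * (2.0 - x3)) / ((x2 - x1) * (x2 - x3))
assert not np.isclose(composite[1], quad_interp_N2)
```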

For the truss problem, here are the entries of K^{e} and \vec{f}^{e}:

Equation 1.8. Element matrix and vector

\begin{array}{l} \displaystyle (K^{e})_{ij} = \int\limits_{-1}^{1}\,E(x(\xi))A(x(\xi))\,\frac{dN_{i}}{d\xi}(\xi)\frac{dN_{j}}{d\xi}(\xi) \cdot \frac{1}{J(\xi)}\,\,d\xi \\[24pt] \displaystyle (\vec{f}^{e})_{i} = \int\limits_{-1}^{1}\,f(x(\xi))\,N_{i}(\xi) \cdot J(\xi)\,\,d\xi \end{array}

where,

\begin{array}{l} \displaystyle N_{1}(\xi) = \frac{1}{2}\xi^{2} - \frac{1}{2}\xi \\[20pt] N_{2}(\xi) = -\xi^{2} + 1 \\[16pt] \displaystyle N_{3}(\xi) = \frac{1}{2}\xi^{2} + \frac{1}{2}\xi \\[16pt] \displaystyle N_{1}'(\xi) = \xi - \frac{1}{2} \\[20pt] N_{2}'(\xi) = -2\xi \\[16pt] \displaystyle N_{3}'(\xi) = \xi + \frac{1}{2} \\[20pt] x(\xi) = x_{1}N_{1}(\xi) + x_{2}N_{2}(\xi) + x_{3}N_{3}(\xi) \\[16pt] \displaystyle J(\xi) = x_{1}\Bigl(\xi - \frac{1}{2}\Bigr) + x_{2}\bigl(-2\xi\bigr) + x_{3}\Bigl(\xi + \frac{1}{2}\Bigr) \end{array}

Again, we must provide the quantities x_{1}, x_{2}, x_{3}, E(x), A(x), and f(x) in an input file. We must also place the node x_{2} within a quarter distance from the element’s center.
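A sketch of equation 1.8 in code, using 3-point Gauss quadrature and the same example element; the constant EA is illustrative. Two structural checks: the element matrix is symmetric, and a rigid translation (equal nodal displacements) produces no force, because the derivatives of the basis functions sum to zero at every point.

```python
import numpy as np

# Three-noded element with an off-center middle node; EA is example data.
x1, x2, x3 = 1.0, 1.6, 3.0
EA = 210e9 * 1e-4

def dN_dxi(xi):
    return np.array([xi - 0.5, -2.0 * xi, xi + 0.5])

def jacobian(xi):
    return x1 * (xi - 0.5) + x2 * (-2.0 * xi) + x3 * (xi + 0.5)

# 3-point Gauss rule from the table in Section 2.
pts = [-np.sqrt(3/5), 0.0, np.sqrt(3/5)]
wts = [5/9, 8/9, 5/9]

Ke = np.zeros((3, 3))
for xi, w in zip(pts, wts):
    dN = dN_dxi(xi)
    Ke += w * EA * np.outer(dN, dN) / jacobian(xi)

assert np.allclose(Ke, Ke.T)                          # symmetric
assert np.allclose(Ke @ np.ones(3), 0.0, atol=1e-4)   # rigid translation costs nothing
```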

2. Numerical Integration

In Section 1, we showed that the element stiffness matrices and element load vectors are computed in the parent domain [-1,\,1]. All that remains is to evaluate the integrals in equations 1.4 or 1.8. Clearly, there is no integration formula that will work for all user-defined functions E(x), A(x), and f(x). We turn to numerical integration, also known as quadrature.

Given a continuous function f = f(x) that is defined over the interval [-1,\,1], we want to approximate the integral

\displaystyle\int\limits_{-1}^{1}\,f(x)\,\,dx

First, we sample the function at quadrature points x_{1},\,x_{2},\,\cdots,\,x_{q} \in [-1,\,1]. Evaluating the function is easy. Then, we weight the sampled values by some numbers w_{1},\,w_{2},\,\cdots,\,w_{q}. Lastly, we combine the weighted values (i.e. take a linear combination) to approximate the integral:

\displaystyle\int\limits_{-1}^{1}\,f(x)\,\,dx \,=\, w_{1}f(x_{1}) + w_{2}f(x_{2}) + \,\cdots\, + w_{q}f(x_{q}) \,+\, \mbox{error}

Let’s now take a look at how to choose the quadrature points and the weights so that we are guaranteed to have a good approximation (have a small error).

a. Newton-Cotes quadrature

The idea behind Newton-Cotes quadrature is simple: if we can approximate f with a polynomial p of degree (q - 1), we can integrate the polynomial instead. Integrating a polynomial is easy.

We use the polynomial that interpolates f at the quadrature points, which are spaced evenly apart inside the domain (-1,\,1). We intentionally don’t place quadrature points at the endpoints x = -1 and x = 1, because the derivatives of the basis functions may be undefined at the element interfaces.

Therefore, let’s look for the polynomial p that satisfies the equations,

\begin{array}{c} p(x_{1}) \,=\, f(x_{1}) \\[16pt] p(x_{2}) \,=\, f(x_{2}) \\[16pt] \vdots \\[16pt] p(x_{q}) \,=\, f(x_{q}) \end{array}

Such a polynomial exists and is unique because we can create Lagrange interpolation basis functions N_{1},\,N_{2},\,\cdots,\,N_{q} (just like we did for the isoparametric approach):

p(x) = f(x_{1})N_{1}(x) + f(x_{2})N_{2}(x) + \,\cdots\, + f(x_{q})N_{q}(x)

We get,

\begin{aligned} \displaystyle\int\limits_{-1}^{1}\,f(x)\,\,dx & \approx\, \displaystyle \int\limits_{-1}^{1}\,p(x)\,\,dx \\[12pt] & \displaystyle = f(x_{1})\int\limits_{-1}^{1}\,N_{1}(x)\,\,dx + f(x_{2})\int\limits_{-1}^{1}\,N_{2}(x)\,\,dx + \,\cdots\, + f(x_{q})\int\limits_{-1}^{1}\,N_{q}(x)\,\,dx \\[20pt] & \equiv w_{1}f(x_{1}) + w_{2}f(x_{2}) + \,\cdots\, + w_{q}f(x_{q}) \end{aligned}

In summary, for Newton-Cotes quadrature, the sample points are evenly spaced in the open interval (-1,\,1). How much weight we assign each sample is determined by the integral of the corresponding Lagrange interpolation basis function.

The table below shows the quadrature scheme for q = 1,\,2,\,3,\,4.

\begin{array}{|c|cccc|c|} \hline & x_{1} & x_{2} & x_{3} & x_{4} & \\[4pt] q & w_{1} & w_{2} & w_{3} & w_{4} & \mbox{error} \\[4pt]\hline & 0 & & & & \\[4pt] \,\,\,1\,\,\, & 2 & & & & \displaystyle\frac{1}{3}\,f^{(2)}(\bar{x}) \\[8pt]\hline & -1/3 & 1/3 & & & \\[4pt] \,\,\,2\,\,\, & 1 & 1 & & & \displaystyle\frac{2}{9}\,f^{(2)}(\bar{x}) \\[8pt]\hline & -1/2 & 0 & 1/2 & & \\[4pt] \,\,\,3\,\,\, & 4/3 & -2/3 & 4/3 & & \displaystyle\frac{7}{720}\,f^{(4)}(\bar{x}) \\[8pt]\hline & -3/5 & -1/5 & 1/5 & 3/5 & \\[4pt] \,\,\,4\,\,\, & \,\,\,11/12\,\,\, & \,\,\,1/12\,\,\, & \,\,\,1/12\,\,\, & \,\,\,11/12\,\,\, & \displaystyle\,\,\,\frac{38}{5625}\,f^{(4)}(\bar{x})\,\,\, \\[8pt]\hline \end{array}

If we use q quadrature points, the Newton-Cotes quadrature is exact for all functions that are polynomials of degree (q - 1) or less. We can see this from the error term. \bar{x} represents some number in [-1,\,1] and differs from one quadrature to another.

The error term also shows that, when q is odd, we get extra accuracy. For example, the 3-point quadrature is exact for cubic polynomials, not just quadratic polynomials. The 4-point quadrature is as accurate as the 3-point one (it may have a smaller error constant), but requires us to evaluate f at one additional point.
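The construction above translates directly into code. A sketch that builds the open Newton-Cotes weights by integrating the Lagrange basis polynomials, reproducing the q = 3 row of the table:

```python
import numpy as np

# Open Newton-Cotes rule on [-1, 1]: q evenly spaced interior points,
# weights = integrals of the Lagrange basis polynomials.
def newton_cotes_open(q):
    pts = np.array([-1.0 + 2.0 * (i + 1) / (q + 1) for i in range(q)])
    wts = np.empty(q)
    for i in range(q):
        # Lagrange basis N_i: 1 at pts[i], 0 at the other points.
        others = np.delete(pts, i)
        coeffs = np.poly(others) / np.prod(pts[i] - others)
        anti = np.polyint(coeffs)                     # antiderivative
        wts[i] = np.polyval(anti, 1.0) - np.polyval(anti, -1.0)
    return pts, wts

pts, wts = newton_cotes_open(3)
assert np.allclose(pts, [-0.5, 0.0, 0.5])
assert np.allclose(wts, [4/3, -2/3, 4/3])   # matches the q = 3 row of the table
```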

b. Gauss quadrature

We arrived at a q-point quadrature that is exact for polynomials of degree (q - 1) by assuming that the points are equally spaced apart.

If we instead optimize the locations of the quadrature points along with their weights, we get a quadrature that is optimal in 1D. Known as Gauss quadrature, it is exact for polynomials of degree up to (2q - 1). Each additional quadrature point buys us two more degrees of exactness! No other q-point quadrature can achieve more accuracy in 1D.

Gauss quadrature comes from orthogonal polynomials called Legendre polynomials. The topic is out of scope, so I’ll just list the quadrature scheme below:

\begin{array}{|c|cccc|c|} \hline & x_{1} & x_{2} & x_{3} & x_{4} & \\[4pt] q & w_{1} & w_{2} & w_{3} & w_{4} & \mbox{error} \\[4pt]\hline & 0 & & & & \\[4pt] \,\,\,1\,\,\, & 2 & & & & \displaystyle\frac{1}{3}\,f^{(2)}(\bar{x}) \\[8pt]\hline & -\sqrt{1/3} & \sqrt{1/3} & & & \\[4pt] \,\,\,2\,\,\, & 1 & 1 & & & \displaystyle\frac{1}{135}\,f^{(4)}(\bar{x}) \\[8pt]\hline & -\sqrt{3/5} & 0 & \sqrt{3/5} & & \\[4pt] \,\,\,3\,\,\, & 5/9 & 8/9 & 5/9 & & \displaystyle\frac{1}{15750}\,f^{(6)}(\bar{x}) \\[8pt]\hline & -0.86114 & -0.33998 & 0.33998 & 0.86114 & \\[4pt] \,\,\,4\,\,\, & \,\,\,0.34785\,\,\, & \,\,\,0.65215\,\,\, & \,\,\,0.65215\,\,\, & \,\,\,0.34785\,\,\, & \displaystyle\,\,\,\frac{1}{3472875}\,f^{(8)}(\bar{x})\,\,\, \\[8pt]\hline \end{array}
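A sketch that pulls the Gauss points and weights from NumPy and checks the exactness claim for q = 3 (exact through degree 2q - 1 = 5, but not degree 6):

```python
import numpy as np

# Gauss-Legendre points and weights on [-1, 1] for q = 3.
q = 3
pts, wts = np.polynomial.legendre.leggauss(q)

def quad(f):
    return np.sum(wts * f(pts))

# Exact for x^5 (odd, so the integral is 0) and x^4 (integral 2/5)...
assert np.isclose(quad(lambda x: x**5), 0.0)
assert np.isclose(quad(lambda x: x**4), 2.0 / 5.0)

# ...but not for x^6, whose true integral is 2/7.
assert not np.isclose(quad(lambda x: x**6), 2.0 / 7.0)
```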

3. FEM Equations for Elastostatics

a. Equilibrium equation

We want to find the displacement field \vec{u} \in \mathbb{R}^{3} that satisfies the equation,

\displaystyle\int\limits_{\Omega}\,\sigma : \delta\varepsilon\,\,dV \,=\, \int\limits_{\Omega}\,\vec{b} \cdot \delta\vec{u}\,\,dV + \int\limits_{\partial\Omega}\,\vec{t} \cdot \delta\vec{u}\,\,dS

for all admissible test functions \delta\vec{u} \in \mathbb{R}^{3}. In the context of the Principle of Virtual Work, we call the test functions virtual displacements.

Because the stress and strain tensors are symmetric, we can write their inner product \sigma : \delta\varepsilon as the dot product between the stress “vector” and the (virtual) strain “vector” using the engineering strains. In other words,

\begin{aligned} \displaystyle\int\limits_{\Omega}\,\sigma : \delta\varepsilon\,\,dV &= \displaystyle\int\limits_{\Omega}\,\bigl(\sigma_{xx}\delta\varepsilon_{xx} + \sigma_{yy}\delta\varepsilon_{yy} + \sigma_{zz}\delta\varepsilon_{zz} + \bigr. \\[-4pt] & \mbox{\hspace{1.2cm}} \bigl.\sigma_{xy}\delta\varepsilon_{xy} + \sigma_{yx}\delta\varepsilon_{yx} + \sigma_{xz}\delta\varepsilon_{xz} + \sigma_{zx}\delta\varepsilon_{zx} + \sigma_{yz}\delta\varepsilon_{yz} + \sigma_{zy}\delta\varepsilon_{zy}\bigr)\,\,dV \\[16pt] &= \int\limits_{\Omega}\,\bigl(\sigma_{xx}\delta\varepsilon_{xx} + \sigma_{yy}\delta\varepsilon_{yy} + \sigma_{zz}\delta\varepsilon_{zz} + \sigma_{yz}\delta\gamma_{yz} + \sigma_{xz}\delta\gamma_{xz} + \sigma_{xy}\delta\gamma_{xy}\bigr)\,\,dV \\[16pt] &= \int\limits_{\Omega}\,\left[\begin{array}{c} \sigma_{xx} \\[8pt] \sigma_{yy} \\[8pt] \sigma_{zz} \\[8pt]\hline \sigma_{yz} \\[8pt] \sigma_{xz} \\[8pt] \sigma_{xy} \end{array}\right] \cdot \left[\begin{array}{c} \delta\varepsilon_{xx} \\[8pt] \delta\varepsilon_{yy} \\[8pt] \delta\varepsilon_{zz} \\[8pt]\hline \delta\gamma_{yz} \\[8pt] \delta\gamma_{xz} \\[8pt] \delta\gamma_{xy} \end{array}\right]\,\,dV \end{aligned}

This technique is known as Voigt notation.
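A quick numerical check of the identity, using random symmetric tensors; the Voigt ordering follows the text (xx, yy, zz, yz, xz, xy):

```python
import numpy as np

# Verify that sigma : delta_eps equals the dot product of the Voigt
# vectors once the shear strains are doubled (engineering strains).
rng = np.random.default_rng(0)

def random_symmetric():
    M = rng.standard_normal((3, 3))
    return 0.5 * (M + M.T)

sigma = random_symmetric()
eps = random_symmetric()

def to_voigt_stress(s):
    return np.array([s[0, 0], s[1, 1], s[2, 2], s[1, 2], s[0, 2], s[0, 1]])

def to_voigt_strain(e):
    # engineering shear strains: gamma = 2 * epsilon
    return np.array([e[0, 0], e[1, 1], e[2, 2], 2 * e[1, 2], 2 * e[0, 2], 2 * e[0, 1]])

assert np.isclose(np.sum(sigma * eps), to_voigt_stress(sigma) @ to_voigt_strain(eps))
```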

b. Constitutive equation

For an isotropic and homogeneous material, the stress and strain fields (the real ones) are related as follows:

\displaystyle \sigma_{ij} = C_{ijkl}\varepsilon_{kl},\,\,\,C_{ijkl} = \lambda\,\delta_{ij}\delta_{kl} + 2\mu \cdot \frac{1}{2}\bigl(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\bigr)

In Voigt notation, we get,

\left[\begin{array}{c} \sigma_{xx} \\[8pt] \sigma_{yy} \\[8pt] \sigma_{zz} \\[8pt]\hline \sigma_{yz} \\[8pt] \sigma_{xz} \\[8pt] \sigma_{xy} \end{array}\right] = \left[\begin{array}{ccc|ccc} \lambda + 2\mu & \lambda & \lambda & & & \\[8pt] \lambda & \lambda + 2\mu & \lambda & & & \\[8pt] \lambda & \lambda & \lambda + 2\mu & & & \\[8pt]\hline & & & \,\,\,\,\,\,\mu\,\,\,\,\,\, & & \\[8pt] & & & & \,\,\,\,\,\,\mu\,\,\,\,\,\, & \\[8pt] & & & & & \,\,\,\,\,\,\mu\,\,\,\,\,\, \end{array}\right] \left[\begin{array}{c} \varepsilon_{xx} \\[8pt] \varepsilon_{yy} \\[8pt] \varepsilon_{zz} \\[8pt]\hline \gamma_{yz} \\[8pt] \gamma_{xz} \\[8pt] \gamma_{xy} \end{array}\right]

Note that Lamé’s constant \lambda and the shear modulus \mu are related to Young’s modulus E and Poisson’s ratio \nu as follows:

\displaystyle (\lambda + 2\mu) = \frac{E(1 - \nu)}{(1 + \nu)(1 - 2\nu)},\,\,\,\lambda = \frac{E\nu}{(1 + \nu)(1 - 2\nu)},\,\,\,\mu = \frac{E}{2(1 + \nu)}
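These relations are easy to verify in code. A sketch that builds \lambda and \mu from E and \nu (steel-like example values) and assembles the Voigt constitutive matrix above:

```python
import numpy as np

# Lame constant and shear modulus from E and nu (steel-like example values),
# with a consistency check against the (lambda + 2*mu) expression.
E, nu = 210e9, 0.3

lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))
assert np.isclose(lam + 2 * mu, E * (1 - nu) / ((1 + nu) * (1 - 2 * nu)))

# Voigt constitutive matrix for an isotropic, homogeneous material.
C = np.zeros((6, 6))
C[:3, :3] = lam
C[np.arange(3), np.arange(3)] = lam + 2 * mu
C[np.arange(3, 6), np.arange(3, 6)] = mu
assert np.all(np.linalg.eigvalsh(C) > 0)   # positive definite for 0 < nu < 0.5
```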

c. Strain-displacement relation

Under the assumption of small deformations,

\displaystyle\varepsilon_{ij} = \frac{1}{2}\bigl(u_{i,j} + u_{j,i}\bigr)

In Voigt notation,

\displaystyle\left[\begin{array}{c} \varepsilon_{xx} \\[8pt] \varepsilon_{yy} \\[8pt] \varepsilon_{zz} \\[8pt]\hline \gamma_{yz} \\[8pt] \gamma_{xz} \\[8pt] \gamma_{xy} \end{array}\right] = \left[\begin{array}{ccc} \frac{\partial}{\partial x} & & \\[8pt] & \frac{\partial}{\partial y} & \\[8pt] & & \frac{\partial}{\partial z} \\[-8pt] \\\hline \\[-8pt] & \frac{\partial}{\partial z} & \frac{\partial}{\partial y} \\[8pt] \frac{\partial}{\partial z} & & \frac{\partial}{\partial x} \\[8pt] \frac{\partial}{\partial y} & \frac{\partial}{\partial x} & \end{array}\right]\left[\begin{array}{c} u_{x} \\[8pt] u_{y} \\[8pt] u_{z} \end{array}\right]

We assume that this relation also holds for the virtual fields.

d. Finite-dimensional subspace

Finally, we approximate the displacement field and the virtual displacement field by following the isoparametric approach. For example, when we use tetrahedral linear elements, we have the following relation on each element:

\left[\begin{array}{c} u_{x} \\[8pt] u_{y} \\[8pt] u_{z} \end{array}\right] = \left[\begin{array}{ccc|ccc|ccc|ccc} N_{1} & 0 & 0 & N_{2} & 0 & 0 & N_{3} & 0 & 0 & N_{4} & 0 & 0 \\[8pt] 0 & N_{1} & 0 & 0 & N_{2} & 0 & 0 & N_{3} & 0 & 0 & N_{4} & 0 \\[8pt] 0 & 0 & N_{1} & 0 & 0 & N_{2} & 0 & 0 & N_{3} & 0 & 0 & N_{4} \end{array}\right] \left[\begin{array}{c} u_{1} \\[8pt] v_{1} \\[8pt] w_{1} \\[8pt]\hline u_{2} \\[8pt] v_{2} \\[8pt] w_{2} \\[8pt]\hline u_{3} \\[8pt] v_{3} \\[8pt] w_{3} \\[8pt]\hline u_{4} \\[8pt] v_{4} \\[8pt] w_{4} \end{array}\right]
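A sketch of how this matrix acts. The shape-function values below are illustrative (any point’s values work, so long as they sum to one, i.e. partition of unity); the check confirms that equal nodal displacements reproduce a rigid translation.

```python
import numpy as np

# Shape-function values of a 4-noded element at some interior point
# (example numbers; only the partition of unity matters here).
Nvals = np.array([0.1, 0.2, 0.3, 0.4])
assert np.isclose(Nvals.sum(), 1.0)   # partition of unity

# Assemble the 3 x 12 shape-function matrix shown above.
Nmat = np.zeros((3, 12))
for a in range(4):
    Nmat[:, 3 * a:3 * a + 3] = Nvals[a] * np.eye(3)

# A rigid translation: every node displaced by the same vector c.
c = np.array([1.0, -2.0, 0.5])
d = np.tile(c, 4)                 # (u1, v1, w1, ..., u4, v4, w4)
assert np.allclose(Nmat @ d, c)   # the element reproduces the translation
```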

4. Exercises

a. Problem 1

Consider a steel pile of length L = 4\,\mbox{m}, Young’s modulus E = 210\,\mbox{GPa}, and cross-sectional area A = \pi(0.2)^2\,\mbox{m}^{2}.

A steel pile embedded in soil.

The pile is fixed at the bottom and is assumed to rest on a Winkler foundation along its length. In other words, we assume that the soil exerts a force that is opposite and linearly proportional to the displacement of the pile, i.e. q = q(x) = ku(x) for some constant k (called the foundation stiffness). Note, q has the unit of \mbox{N}/\mbox{m}.

We want to know how the pile deforms due to the load P_{0} = 10^{5}\,\mbox{N} at its head, the distributed load f = f(x)\,\mbox{N}/\mbox{m}, and the lateral force q = ku(x) due to the soil. For simplicity, we consider k = EA/L^{2} and f(x) = P_{0}/L below.

(a) From equilibrium of a differential element, derive the governing equation and the BCs for the displacement of the pile u = u(x).

(b) Find the weak form. Assuming an element has (p + 1) basis functions, write down the element stiffness matrix and the element load vector.

(c) From the weak form, obtain the governing equation and the BCs that u satisfies. (In other words, show that we can return to the strong form.)

(d) Consider 1 element with a linear basis (p = 1). Find the FE solution u_{h} and calculate the internal force F_{h}(0) = EA\,u_{h}'(0) at the pile head. What do you see?

(e) Now, consider 2 elements, each with a linear basis (p = 1).

(f) Lastly, consider 1 and 2 elements with a quadratic basis (p = 2) in each. Assume that the middle node is placed at the element’s center.

Solution