## Topics in Computational Mechanics: Part 1

Over the next few posts, I will share the writings and programs that I created for a class in computational structural analysis: how we efficiently analyze a structure using numerical methods. The field is a subset of computational mechanics, which combines the disciplines of mathematics, computer science, and engineering.

The class consisted of juniors and seniors in mechanical or aerospace engineering, so, for brevity, I will assume that you have some familiarity with statics, linear algebra, and calculus. If time permits in the future, I will upload my class notes to fill in the gaps and share more drawings.

## 3 Projects for Teaching Numerical Linear Algebra

In a recent conversation, I realized that I had forgotten to post some of the numerical linear algebra (NLA) projects that I created in graduate school. These projects, along with a few others that I have already published on this blog, can help us appreciate the theory and applications of NLA.

## Iterative Methods: Part 3

Let’s look at one more way to solve the equation $A\vec{x} = \vec{b}$. We assume that $A \in \mathbb{R}^{n \times n}$ is nonsingular, and define the $k$-th Krylov subspace as follows:

$\mathscr{K}_{k} \,:=\, \mbox{span}\{\vec{b},\,A\vec{b},\,\cdots,\,A^{k - 1}\vec{b}\}$.
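To make the definition concrete, here is a small sketch in Python with NumPy (the matrix and vector are illustrative) that builds the vectors spanning $\mathscr{K}_{k}$ by repeated matrix-vector products:

```python
import numpy as np

# Sketch: assemble [b, Ab, ..., A^(k-1) b], whose columns span the
# k-th Krylov subspace. (Illustrative only; practical Krylov methods
# orthogonalize on the fly, e.g. via Lanczos or Arnoldi, because these
# raw vectors quickly become nearly parallel.)
def krylov_basis(A, b, k):
    vectors = [b]
    for _ in range(k - 1):
        vectors.append(A @ vectors[-1])
    return np.column_stack(vectors)  # n-by-k matrix

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
K = krylov_basis(A, b, 2)  # columns: b and Ab
```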

Krylov subspace methods are efficient and popular iterative methods for solving large, sparse linear systems. When $A$ is symmetric positive definite (SPD), i.e.

$\left\{\begin{array}{l} A^{T} = A \\[12pt] \vec{x}^{T}A\vec{x} > 0,\,\,\,\forall\,\vec{x} \neq \vec{0}, \end{array}\right.$

we can use a Krylov subspace method called Conjugate Gradient (CG).
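The two SPD conditions can be checked numerically; here is a sketch in Python with NumPy (the matrix is illustrative): symmetry is a direct comparison, and positive definiteness of a symmetric matrix holds exactly when a Cholesky factorization succeeds.

```python
import numpy as np

def is_spd(A):
    # Condition 1: A^T = A (up to floating-point tolerance).
    if not np.allclose(A, A.T):
        return False
    # Condition 2: x^T A x > 0 for all x != 0, which for a symmetric
    # matrix is equivalent to a successful Cholesky factorization.
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
print(is_spd(A))   # True
print(is_spd(-A))  # False: symmetric, but negative definite
```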

Today, let’s find out how CG works and use it to solve 2D Poisson’s equation.
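As a preview, here is a minimal textbook sketch of the CG iteration in Python with NumPy, with no preconditioning and a simple residual-based stopping test (the matrix is illustrative, not the Poisson matrix):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs_old = r @ r
    for _ in range(len(b)):  # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # next direction, A-conjugate to the old ones
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```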

## Iterative Methods: Part 2

Last time, we looked at 2D Poisson’s equation and discussed how to arrive at a matrix equation using the finite difference method. The matrix, which represents the discrete Laplace operator, is sparse, so we can use an iterative method to solve the equation efficiently.

Today, we will look at Jacobi, Gauss-Seidel, Successive Over-Relaxation (SOR), and Symmetric SOR (SSOR), along with a couple of related techniques: red-black ordering and Chebyshev acceleration. In addition, we will analyze the convergence of each method for Poisson's equation, both analytically and numerically.
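As a preview of the numerical side of that convergence analysis, here is a sketch in Python with NumPy: a stationary method converges if and only if the spectral radius of its iteration matrix is below 1. (The small tridiagonal matrix below is illustrative, not the Poisson matrix itself.)

```python
import numpy as np

# Split A = D + L + U into diagonal, strictly lower, and strictly upper
# parts; Jacobi iterates with M = -D^{-1}(L + U), and Gauss-Seidel with
# M = -(D + L)^{-1} U. Convergence <=> spectral radius rho(M) < 1.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

rho_jacobi = spectral_radius(-np.linalg.inv(D) @ (L + U))
rho_gs = spectral_radius(-np.linalg.inv(D + L) @ U)
# Both radii are below 1, so both methods converge; for this tridiagonal
# matrix, Gauss-Seidel's radius equals rho_jacobi**2, so it converges
# roughly twice as fast.
```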

## Iterative Methods: Part 1

Once again, we consider solving the equation $A\vec{x} = \vec{b}$, where $A \in \mathbb{R}^{n \times n}$ is nonsingular. When we use a factorization method for dense matrices, such as LU, Cholesky, SVD, or QR, we perform $O(n^{3})$ operations and store $O(n^{2})$ matrix entries. As a result, it quickly becomes too expensive to model a real-life problem more accurately by adding more variables.

Luckily, when the matrix is sparse (e.g. because we used the finite difference method or the finite element method), we can use an iterative method to approximate the solution efficiently. Typically, an iterative method incurs $O(n^{2})$ operations, if not fewer, and requires only $O(n)$ storage.
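To see why sparsity changes the economics, here is a matrix-free sketch in Python with NumPy for the 1D discrete Laplacian $T = \mathrm{tridiag}(-1, 2, -1)$, a simpler cousin of the 2D operator: an iterative method only ever needs the product $T\vec{x}$, which costs $O(n)$ work and $O(n)$ storage, with no $n \times n$ array in sight.

```python
import numpy as np

def laplacian_matvec(x):
    # y = T x for T = tridiag(-1, 2, -1), computed without forming T:
    # O(n) work and O(n) storage instead of O(n^2) for a dense matrix.
    y = 2.0 * x
    y[:-1] -= x[1:]   # superdiagonal contribution
    y[1:] -= x[:-1]   # subdiagonal contribution
    return y

x = np.ones(5)
y = laplacian_matvec(x)  # [1., 0., 0., 0., 1.]
```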

Over the next few posts, we will look at 5 iterative methods:

• Jacobi
• Gauss-Seidel
• Successive Over-Relaxation (SOR)
• Symmetric SOR (SSOR)
• Conjugate Gradient (CG)
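As a taste of what's ahead, here is a minimal sketch of the first method on the list, Jacobi, in Python with NumPy: each iteration solves every equation for its own diagonal unknown using the previous iterate. (The matrix is illustrative; Jacobi converges on it because it is strictly diagonally dominant.)

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    d = np.diag(A)               # diagonal of A
    R = A - np.diagflat(d)       # off-diagonal part of A
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d  # solve each row for its diagonal unknown
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```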