Last time, we looked at 2D Poisson’s equation and discussed how to arrive at a matrix equation using the finite difference method. The matrix, which represents the discrete Laplace operator, is sparse, so we can use an iterative method to solve the equation efficiently.
Today, we will look at Jacobi, Gauss-Seidel, Successive Over-Relaxation (SOR), and Symmetric SOR (SSOR), and a couple of related techniques—red-black ordering and Chebyshev acceleration. In addition, we will analyze the convergence of each method for Poisson’s equation, both analytically and numerically.
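As a preview, here is a minimal sketch of the Jacobi method applied to the 2D Poisson equation with zero Dirichlet boundary conditions (the function name, grid setup, and manufactured solution are my own, not from the post):

```python
import numpy as np

def jacobi_poisson(f, n, iters=2000):
    """Approximately solve -Δu = f on the unit square with zero Dirichlet
    boundary conditions, using Jacobi iteration on an n-by-n interior grid.
    (Illustrative sketch only.)"""
    h = 1.0 / (n + 1)
    u = np.zeros((n + 2, n + 2))  # includes the boundary (all zeros)
    for _ in range(iters):
        # 5-point stencil: each interior point becomes the average of its
        # four neighbors plus the scaled source term.  The right-hand side
        # is evaluated entirely from the old iterate, so this is true Jacobi.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + h * h * f)
    return u

# Manufactured solution u(x, y) = sin(pi x) sin(pi y), so f = 2 pi^2 u.
n = 20
x = np.linspace(0, 1, n + 2)[1:-1]
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
u = jacobi_poisson(f, n)
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
err = np.max(np.abs(u[1:-1, 1:-1] - exact))  # dominated by O(h^2) discretization error
```

Jacobi is the slowest of the methods listed above, which is exactly why red-black ordering and Chebyshev acceleration are worth discussing alongside it.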
Continue reading “Iterative Methods: Part 2”
Once again, we consider solving the equation Ax = b, where the matrix A is nonsingular. When we use a factorization method for dense matrices, such as LU, Cholesky, SVD, and QR, we make O(n³) operations and store O(n²) matrix entries. As a result, it’s hard to accurately model a real-life problem by adding more variables.
Luckily, when the matrix is sparse (e.g. because we used the finite difference method or the finite element method), we can use an iterative method to approximate the solution efficiently. Typically, an iterative method incurs O(n) operations per iteration, if not fewer, and requires O(n) storage.
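To sketch why sparsity makes this possible, here is a matrix-vector product in compressed sparse row (CSR) format, written in plain Python for illustration (the example matrix and names are mine): the work is proportional to the number of stored nonzeros, not n².

```python
def csr_matvec(data, indices, indptr, x):
    """Multiply a sparse matrix, stored in CSR format, by a dense vector.
    The loop touches each stored nonzero exactly once."""
    y = [0.0] * (len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 1D discrete Laplacian (tridiagonal), n = 4:
#     [ 2 -1  0  0]
#     [-1  2 -1  0]
#     [ 0 -1  2 -1]
#     [ 0  0 -1  2]
data    = [2, -1, -1, 2, -1, -1, 2, -1, -1, 2]
indices = [0, 1, 0, 1, 2, 1, 2, 3, 2, 3]
indptr  = [0, 2, 5, 8, 10]
y = csr_matvec(data, indices, indptr, [1, 1, 1, 1])
```

Every iterative method below needs little more than this matvec plus a few vector operations, which is where the O(n) per-iteration cost comes from.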
Over the next few posts, we will look at 5 iterative methods:
- Jacobi
- Gauss-Seidel
- Successive Over-Relaxation (SOR)
- Symmetric SOR (SSOR)
- Conjugate Gradient (CG)
In addition, we will solve 2D Poisson’s equation using these methods. We will compare the approximate solutions to the exact solution to illustrate the accuracy of each method.
Continue reading “Iterative Methods: Part 1”
Today, I make my Isogeometric Analysis library available to you.
Written in Matlab, it reflects the time and effort that I spent in graduate school. It also shows its age, unfortunately: B-splines and NURBS are fully supported, but not T-splines and other state-of-the-art bases. One day, I hope to return.
You can find my Isogeometric Analysis library here:
Download from GitHub
If you like regular, vanilla Finite Element Analysis:
Download from GitHub
In numerical linear algebra, we create ways for a computer to solve a linear system of equations Ax = b. In doing so, we analyze how efficiently and accurately we can find the solution x.
Perturbation theory concerns how much error we incur in the solution when we perturb (spoil) the data A and b. A classic statement tells us that the amount of error depends on the condition number of the matrix A.
I will define and prove the statement, and help you understand it by “hearing” it.
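As a toy preview (my own example, not from the post), a diagonal matrix lets us read off the condition number directly as the ratio of the largest diagonal entry to the smallest, and shows how a tiny perturbation in b gets amplified in x:

```python
# A = diag(1, eps) has condition number kappa(A) = 1 / eps.
eps = 1e-6
a11, a22 = 1.0, eps

b = [1.0, eps]                  # exact solution: x = (1, 1)
x = [b[0] / a11, b[1] / a22]

delta = 1e-8                    # tiny perturbation of the data b
b_pert = [b[0], b[1] + delta]
x_pert = [b_pert[0] / a11, b_pert[1] / a22]

rel_err_b = delta / max(abs(v) for v in b)                   # about 1e-8
rel_err_x = abs(x_pert[1] - x[1]) / max(abs(v) for v in x)   # about 1e-2
# The relative error in x exceeds that in b by a factor of
# roughly kappa(A) = 1e6, as the classic bound predicts.
```

The perturbation in b is invisible at 8 decimal places, yet the solution moves in the second decimal place; that amplification is exactly what the condition number quantifies.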
Continue reading “Hearing Perturbation Theory”
This year, I took up cross-stitching to decorate my place. I picked a complex design for the heck of it, too. Shortly after I inserted three strands into the fabric, I realized that we could cast cross-stitch as an optimization problem (shortest path).
See, if your strands haphazardly jump from one block to another, you will suffer a shortage of strands. The key is to move, whenever possible, from one block to an adjacent one by 1 unit, horizontally or vertically. If that is not possible, then move diagonally in as few units as possible.
By following this procedure, we get x’s on the front and a neat array of l’s on the back. I even saved enough strands to create a full border.
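For illustration only (my own framing, not part of any pattern), the move rule above amounts to a Chebyshev-distance cost on the grid of blocks, and a greedy nearest-neighbor ordering is one cheap heuristic for the shortest-path formulation:

```python
def thread_cost(a, b):
    """Cost of moving between blocks a and b on the back of the fabric:
    a diagonal step covers one unit in each direction, so the cost is
    the Chebyshev distance max(|dx|, |dy|)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def greedy_order(blocks):
    """Nearest-neighbor heuristic: always stitch the closest unvisited
    block next.  Not optimal in general, but easy to follow by hand."""
    order = [blocks[0]]
    rest = list(blocks[1:])
    while rest:
        nxt = min(rest, key=lambda p: thread_cost(order[-1], p))
        rest.remove(nxt)
        order.append(nxt)
    return order

# Hypothetical blocks (grid coordinates) that all need an x.
blocks = [(0, 0), (3, 3), (0, 1), (1, 1), (3, 2)]
route = greedy_order(blocks)
total = sum(thread_cost(a, b) for a, b in zip(route, route[1:]))
```

Visiting the blocks in list order would cost far more thread than the greedy route, which is the strand shortage the post warns about.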