In the past 2 weeks, I tasked myself with helping users create and visualize LDAP filters. An LDAP filter lets users perform powerful searches, and can take any of these forms:
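As an illustrative sketch (my own example, not code from the post), the basic filter forms defined by RFC 4515 — equality, AND, OR, and NOT — can be composed as plain strings. The helper names below are hypothetical:

```python
# Hypothetical helpers for the standard LDAP filter forms (RFC 4515).
# These build plain filter strings; they are not tied to any LDAP library.

def eq(attr, value):
    """Equality match: (attr=value)"""
    return f"({attr}={value})"

def and_(*filters):
    """Conjunction: (&(f1)(f2)...)"""
    return "(&" + "".join(filters) + ")"

def or_(*filters):
    """Disjunction: (|(f1)(f2)...)"""
    return "(|" + "".join(filters) + ")"

def not_(f):
    """Negation: (!(f))"""
    return "(!" + f + ")"

# Example: all person entries named Jane or John.
f = and_(eq("objectClass", "person"),
         or_(eq("givenName", "Jane"), eq("givenName", "John")))
# f == "(&(objectClass=person)(|(givenName=Jane)(givenName=John)))"
```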
How many of you like to eat, play, and most importantly, drink? (Hopefully, everyone!)
I helped create an app called LocALL. It gives 50,000 recommendations for short trips to eat, play, and drink, and to support local businesses in Austin. We can extend our app to cover ~400 metropolitan cities in the US and create 20 million recommendations. (Big data and machine learning!)
At the heart of our solution is math. We used spherical geometry and probability to create simple, fast recommendations.
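As a hedged sketch of the "spherical geometry" part, great-circle distance via the haversine formula is a standard way to measure how far apart two places on Earth are; the coordinates below are illustrative, not data from the app:

```python
import math

# Haversine (great-circle) distance sketch. Earth radius and the sample
# coordinates are illustrative assumptions, not values from the post.
EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Roughly downtown Austin to the Domain (approximate coordinates):
d = haversine_km(30.2672, -97.7431, 30.4019, -97.7252)  # about 15 km
```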
Let’s look at one more way to solve the equation $Ax = b$. We assume that $A$ is nonsingular, and define the $m$-th Krylov subspace as follows:

$$\mathcal{K}_m(A, b) = \operatorname{span}\{b, Ab, A^2 b, \ldots, A^{m-1} b\}.$$
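As a concrete sketch (my own illustration, not code from the post), the vectors $b, Ab, \ldots, A^{m-1}b$ that span the Krylov subspace can be generated by repeated matrix-vector products:

```python
import numpy as np

# Build a (non-orthogonalized) basis for the m-th Krylov subspace
# K_m(A, b) = span{b, Ab, ..., A^(m-1) b}. The matrix and vector below
# are illustrative assumptions.

def krylov_basis(A, b, m):
    """Return an n x m matrix whose columns are b, Ab, ..., A^(m-1) b."""
    K = np.empty((len(b), m))
    v = b.astype(float)
    for j in range(m):
        K[:, j] = v
        v = A @ v          # next power of A applied to b
    return K

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
K = krylov_basis(A, b, 2)   # columns: b and A @ b
```

In practice these raw powers become nearly linearly dependent, which is why methods like CG work with orthogonalized directions instead.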
Krylov subspace methods are efficient and popular iterative methods for solving large, sparse linear systems. When $A$ is symmetric, positive definite (SPD), i.e.

$$A = A^T \quad \text{and} \quad x^T A x > 0 \ \text{for all} \ x \neq 0,$$
we can use a Krylov subspace method called Conjugate Gradient (CG).
Today, let’s find out how CG works and use it to solve 2D Poisson’s equation.
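A compact, textbook-style CG sketch (my own illustration, not the post's code) shows the main loop — at each step we take an $A$-orthogonal search direction, choose the step length that minimizes the error in the $A$-norm, and update the residual:

```python
import numpy as np

# Standard Conjugate Gradient for SPD A; the test matrix below is an
# illustrative assumption.

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new A-orthogonal direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```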
Last time, we looked at 2D Poisson’s equation and discussed how to arrive at a matrix equation using the finite difference method. The matrix, which represents the discrete Laplace operator, is sparse, so we can use an iterative method to solve the equation efficiently.
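As a sketch of that sparse structure (my own example, assuming a small $n \times n$ interior grid), the discrete Laplacian from the 5-point finite difference stencil can be assembled with Kronecker products:

```python
import numpy as np

# 2D discrete Laplace operator on an n x n interior grid (5-point stencil),
# built from the 1D second-difference matrix via Kronecker products.
# n = 3 is an illustrative choice.

def laplace_2d(n):
    I = np.eye(n)
    T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D second difference
    return np.kron(I, T) + np.kron(T, I)                  # 2D operator

A = laplace_2d(3)   # 9 x 9, symmetric, diagonal entries all 4
```

Each row has at most 5 nonzeros, so the matrix stays very sparse as the grid grows.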
Today, we will look at Jacobi, Gauss-Seidel, Successive Over-Relaxation (SOR), and Symmetric SOR (SSOR), and a couple of related techniques—red-black ordering and Chebyshev acceleration. In addition, we will analyze the convergence of each method for Poisson’s equation, both analytically and numerically.
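To make the first of these concrete, here is a minimal Jacobi sketch (a standard formulation, not the post's own code): split $A$ into its diagonal $D$ and remainder $R$, then iterate $x_{k+1} = D^{-1}(b - R x_k)$:

```python
import numpy as np

# Jacobi iteration sketch; the diagonally dominant test matrix is an
# illustrative assumption (Jacobi converges for it).

def jacobi(A, b, num_iter=100):
    D = np.diag(A)                 # diagonal of A
    R = A - np.diag(D)             # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(num_iter):
        x = (b - R @ x) / D        # x_{k+1} = D^{-1} (b - R x_k)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```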
Once again, we consider solving the equation $Ax = b$, where $A$ is nonsingular. When we use a factorization method for dense matrices, such as LU, Cholesky, SVD, and QR, we make $O(n^3)$ operations and store $O(n^2)$ matrix entries. As a result, it’s hard to accurately model a real-life problem by adding more variables.
Luckily, when the matrix $A$ is sparse (e.g. because we used the finite difference method or the finite element method), we can use an iterative method to approximate the solution efficiently. Typically, an iterative method incurs $O(n^2)$ operations, if not fewer, and requires $O(n)$ storage.
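To see where those savings come from (an illustrative sketch, not the post's code), consider a matrix-vector product in compressed sparse row (CSR) form: it touches only the stored nonzeros, so the cost scales with the number of nonzeros rather than $n^2$:

```python
import numpy as np

# CSR matrix-vector product sketch. The CSR arrays below encode the matrix
# [[4, 1, 0], [1, 3, 0], [0, 0, 2]] by hand, for illustration only.

def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x for A stored in CSR form (data, indices, indptr)."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        # Only the nonzeros of row i are visited.
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

data    = np.array([4.0, 1.0, 1.0, 3.0, 2.0])  # nonzero values, row by row
indices = np.array([0, 1, 0, 1, 2])            # column index of each value
indptr  = np.array([0, 2, 4, 5])               # row i spans indptr[i]:indptr[i+1]
x = np.array([1.0, 2.0, 3.0])
y = csr_matvec(data, indices, indptr, x)       # [6., 7., 6.]
```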
Over the next few posts, we will look at 5 iterative methods:
- Jacobi
- Gauss-Seidel
- Successive Over-Relaxation (SOR)
- Symmetric SOR (SSOR)
- Conjugate Gradient (CG)
In addition, we will solve 2D Poisson’s equation using these methods. We will compare the approximate solutions to the exact solution to illustrate the accuracy of each method.