
Spanning Trees and Direct Solvers


This blog entry is entirely dedicated to a subject that absorbed me for more than a year: support graph preconditioners. When one has to solve, say, a Laplace equation on a mesh, the numerical discretization of the problem generally leads to a linear system Ax = b, where A is a sparse matrix with many zero entries. More importantly, the non-zero entries have a pattern directly inherited from the mesh connectivity. For linear elements, the (i,j) entry is non-zero if and only if there is an edge between node i and node j (what's cool with the Laplacian is that a node is a degree of freedom).

So, as long as the mesh has a moderate connectivity pattern, the matrix inherits that pattern and is therefore relatively sparse. If you factorize the matrix with a direct solver, the work your computer has to do is directly linked to the connectivity of the underlying mesh. For instance, if the underlying mesh is a tree, the complexity of the direct solver is linear in the number of nodes of the tree. Not bad.
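To see why trees are so cheap, here is a pure-Python sketch (the function name and the input format are my own choices for illustration, and I regularize the diagonal so the system is definite): when you eliminate a leaf, only its parent is updated, so a Cholesky-style elimination sweeps the tree once downward and once upward without ever creating fill-in.

```python
from collections import defaultdict

def solve_tree_system(n, edges, diag, b):
    """Solve A x = b in O(n) when A has the sparsity pattern of a tree:
    A[i][i] = diag[i] and A[i][j] = -w for each tree edge (i, j, w)."""
    adj = defaultdict(dict)
    for i, j, w in edges:
        adj[i][j] = adj[j][i] = -w

    # Order nodes root-to-leaves with a DFS from node 0.
    order, parent, stack, seen = [], {0: None}, [0], {0}
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                parent[v] = u
                stack.append(v)

    d, rhs = list(diag), list(b)
    # Forward elimination, leaves first: eliminating u only touches its parent.
    for u in reversed(order):
        p = parent[u]
        if p is None:
            continue
        a = adj[u][p]            # off-diagonal entry A[u][p]
        d[p] -= a * a / d[u]     # Schur complement update of the parent
        rhs[p] -= a * rhs[u] / d[u]

    # Back substitution, root to leaves.
    x = [0.0] * n
    for u in order:
        p = parent[u]
        s = rhs[u] if p is None else rhs[u] - adj[u][p] * x[p]
        x[u] = s / d[u]
    return x
```

On a path graph this degenerates to the classical Thomas algorithm for tridiagonal systems; the tree case is the same idea with branching.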

If your graph is planar (like in the four-color theorem), then the complexity is proportional to the number of nodes to the power 3/2. Not linear, but almost.

So trees are provably the easiest graphs (or meshes, who cares) to factorize, because they have very low (linear) complexity. The idea of support graph preconditioners is to exploit that fact and to construct an approximation of the original matrix whose underlying graph resembles a tree as much as possible.
For a Laplacian matrix, it is possible to construct such an approximation by considering a minimum spanning tree of the mesh. Of course, it is not as easy as that! To get an efficient preconditioner one has to do a bit more, generally adding a few extra edges to the spanning tree to reduce the condition number of the preconditioned system (see [3] for instance).
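To make the spanning-tree step concrete, here is a minimal sketch of Kruskal's algorithm (the function name and edge format are my own choices). For a Vaidya-style preconditioner one would typically feed it weights derived from the off-diagonal Laplacian entries, sorted so the heaviest couplings are kept, but the tree-building machinery itself is the same.

```python
def minimum_spanning_tree(n, edges):
    """Kruskal's algorithm with union-find.  `edges` are (weight, i, j)
    tuples; returns the n-1 edges of an MST of a connected graph."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, i, j in sorted(edges):          # scan edges cheapest first
        ri, rj = find(i), find(j)
        if ri != rj:                       # keep edge iff it joins two components
            parent[ri] = rj
            tree.append((w, i, j))
    return tree
```

For example, on a 4-cycle with edge weights 1, 2, 3, 4, the algorithm keeps the three cheapest edges and drops the one that would close the cycle.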

I find it funny that my current work on making solvers faster is very similar to the children's game of connecting a set of points without lifting the pen and without passing twice over the same edge.


A regular mesh...



... and one of its spanning trees!


However, we can talk about the (undirected) graph of a matrix only for particular matrices, like Laplacian matrices, which are symmetric positive (semi-)definite matrices with positive entries on the diagonal and non-positive off-diagonal entries. In that particular case, there is an isomorphism between the weighted undirected graph and its matrix, but that is a rather technical footnote.
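That correspondence is easy to write down: each edge weight w becomes a pair of off-diagonal entries -w, and each diagonal entry is the sum of the weights incident to that node. A minimal sketch (function name mine):

```python
def graph_laplacian(n, weighted_edges):
    """Dense Laplacian of an undirected weighted graph:
    L[i][j] = -w for each edge (i, j, w), and L[i][i] is the
    sum of weights incident to node i, so every row sums to zero."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in weighted_edges:
        L[i][j] -= w
        L[j][i] -= w
        L[i][i] += w
        L[j][j] += w
    return L
```

Because the rows sum to zero, the constant vector is in the kernel, which is why the pure graph Laplacian is only semi-definite; boundary conditions (or a rank-one correction) make it definite.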

References:
[1] Dan Spielman's website (here), which hosts a wonderful talk of his (here)
[2] Bruce Hendrickson's webpage (here)
[3] Doron Chen and Sivan Toledo, "Vaidya's Preconditioners: Implementation and Experimental Study"
