
Robust Deformation of Stable Objects

In this entry, I'll briefly discuss how to robustly compute the deformation of an object represented by a finite element mesh. There are a handful of methods that do this more or less robustly, and I'll go over each of them, saying a little about their distinctive aspects.

Classic Lagrangian formulation

This is the most used formulation in industrial commercial packages (Abaqus, Ansys, etc.). It simply linearizes the Green-Lagrange strain tensor and differentiates it to get the proper residual and stiffness matrix for a single Newton step. This method is absolutely rigorous: as long as your mechanical behavior is well captured by the strain model, you'll get reliable results. However, it has two main drawbacks. First, you have to be careful when applying new forces or constraints and do it incrementally, otherwise you may really blow up your model. For instance, applying too much force will cause some elements to invert, and the internal forces will never un-invert them (the quadratic strain tensor is invariant under such inversions: the element 'feels' the same strain whether inverted or not, and behaves the same). Second, this method is computationally very expensive, mainly because everything has to be done in very small increments.
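To make that inversion blindness concrete, here is a minimal NumPy sketch (my own illustration, not code from any package) showing that the Green-Lagrange strain E = 1/2 (F^T F - I) of a fully inverted element (deformation gradient -F) is identical to that of the healthy one (F):

import numpy as np

def green_lagrange_strain(F):
    # E = 1/2 (F^T F - I); quadratic in F, hence blind to F -> -F
    return 0.5 * (F.T @ F - np.eye(3))

F = np.diag([1.2, 0.9, 1.0])        # a mildly stretched element
E_ok  = green_lagrange_strain(F)    # det(F) > 0: healthy element
E_inv = green_lagrange_strain(-F)   # det(-F) < 0: inverted element
print(np.allclose(E_ok, E_inv))     # True: the forces cannot tell them apart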

Nonlinear finite element analysis (FEA) performed with MSC MD-Nastran, including large deformation, buckling, and self-contact. Source: http://www.deskeng.com/virtual_desktop/?p=2248

Irving et al.'s method

I like this method a lot because it's elegant. The general idea is to handle element inversion robustly. It only treats inversions that occur during the deformation of the mesh, not elements that are initially inverted or nearly inverted ('slivers').


In general, for most industrial applications, you don't care about element inversions since they don't happen frequently. But in the computer graphics community or in the game industry, you may want to do things in a less controlled manner and let the mechanical/dynamical solver deal with this issue.

The general idea is to compute, in each element, a basis in which the deformation gradient F is diagonal. Element degeneracies and inversions then show up as zero or negative diagonal entries, which can be treated consistently. From that decomposition, it is not too difficult (at least for the isotropic case, or other simple material behaviors) to derive the residual and stiffness matrices as functions of the deformation gradient. What you get is a formulation of your element matrices and residual that is robust to element inversion. The results are very impressive (link to impressive video). But, as the technique suggests, it is quite time-consuming because of the diagonalization of the deformation gradient.
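Here is a minimal NumPy sketch of the diagonalization step in the spirit of Irving et al. (my own paraphrase, not their code): take an SVD F = U F^ V^T, then flip signs so that U and V are pure rotations, pushing any reflection into F^ as a negative stretch that flags the inversion:

import numpy as np

def diagonalize_F(F):
    # Rotation-variant SVD: F = U @ np.diag(Fhat) @ V.T with det(U) = det(V) = +1.
    # An inverted element shows up as a negative entry in Fhat.
    U, Fhat, Vt = np.linalg.svd(F)
    V = Vt.T
    if np.linalg.det(U) < 0:        # remove the reflection from U ...
        U[:, -1] *= -1
        Fhat[-1] *= -1              # ... and push it into the smallest stretch
    if np.linalg.det(V) < 0:        # same for V
        V[:, -1] *= -1
        Fhat[-1] *= -1
    return U, Fhat, V

F = np.diag([1.0, 1.0, -0.5])       # det(F) < 0: an inverted element
U, Fhat, V = diagonalize_F(F)
print(Fhat)                          # [1. 1. -0.5]: the inversion is isolated

A constitutive model can then evaluate (and, if needed, clamp) the diagonal stresses on Fhat and rotate them back with U and V.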


Corotational formulations

In many cases, using a large-deformation formulation is only justified by the need to avoid the effect of rotations on the linear Cauchy strain approximation. The main artifact rotations create is that the model unphysically grows. Indeed, the Cauchy strain is not invariant under rotations; it is only justified for small rotations, where the linearization makes sense (θ instead of sin(θ)). The Green-Lagrange strain, on the other hand, is invariant under rotation of the element and therefore handles this well, but, as we saw, it is not a perfect solution.
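A small numerical illustration (my own sketch): for a pure rotation F = R, the linear Cauchy strain ε = 1/2 (F + F^T) - I is nonzero, which produces spurious elastic forces, while the Green-Lagrange strain vanishes exactly:

import numpy as np

theta = np.deg2rad(30.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])               # pure 2D rotation, no deformation

cauchy = 0.5 * (R + R.T) - np.eye(2)          # linear (small-strain) measure
green_lagrange = 0.5 * (R.T @ R - np.eye(2))  # quadratic measure

print(cauchy)          # nonzero: spurious strain of (cos(theta) - 1) on the diagonal
print(green_lagrange)  # exactly zero: invariant under rotation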


One possibility, if you aren't interested in large strains, is to compute an approximation of the local rotation (the per-element rotation, for instance) and express the linear force-displacement relationship in that rotated frame. Locally, this amounts to a simple rotation. A very good reference in this field is 'Corotational Simulation of Deformable Solids' by Hauth et al. This method can be made very fast and can also be efficiently parallelized on the GPU.
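A minimal sketch of the per-element corotational force computation (my own illustration, not Hauth et al.'s code; it assumes a precomputed rest-state stiffness matrix Ke and stacked nodal position vectors x and x0, and extracts the element rotation with a polar decomposition via SVD):

import numpy as np

def polar_rotation(F):
    # Rotation factor R of the polar decomposition F = R S, via SVD.
    U, _, Vt = np.linalg.svd(F)
    R = U @ Vt
    if np.linalg.det(R) < 0:        # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    return R

def corotational_forces(Ke, x, x0, R):
    # f = -R_blk @ Ke @ (R_blk^T @ x - x0): rotate the element back to its
    # rest frame, apply the linear stiffness there, rotate the forces forward.
    n = x.shape[0] // 3
    R_blk = np.kron(np.eye(n), R)   # block-diagonal per-node rotation
    return -R_blk @ (Ke @ (R_blk.T @ x - x0))

In practice R is re-extracted from each element's deformation gradient at every time step, while Ke stays fixed, which is what makes the method both robust to rotations and fast.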




