
Random by Design

For more than a year now, I have been amazed to realize that what I would call the random approach, both in terms of computational algorithms and hardware design, has unexpected but very encouraging properties.

Microprocessors come with many error-correcting processes, which consume a large share of overall CPU resources (energy, wall-clock time, etc.). By allowing the hardware to make a few mistakes, kept under some probability law, scientists of the Rice-NTU Institute for Sustainable and Applied Infodynamics (ISAID), led by Krishna Palem, showed that significant gains are possible, both in terms of energy demand and performance.
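To get an intuition for "mistakes kept under a probability law", here is a minimal sketch (my own toy model, not the ISAID team's actual hardware): an adder whose low-order output bits may flip with a small probability, and a loop that measures how far the results drift from the exact sum on average. The bit width, flip probability and error metric are illustrative assumptions.

```python
import random

BITS = 16
FLIP_PROB = 0.01  # assumed per-bit error probability on the noisy low-order bits


def inexact_add(a: int, b: int, noisy_bits: int = 4) -> int:
    """Exact sum, then randomly flip some of the lowest `noisy_bits` bits."""
    s = (a + b) & ((1 << BITS) - 1)
    for bit in range(noisy_bits):
        if random.random() < FLIP_PROB:
            s ^= 1 << bit
    return s


def average_relative_error(trials: int = 100_000) -> float:
    """Average relative deviation of the inexact adder over random operands."""
    total = 0.0
    for _ in range(trials):
        a = random.getrandbits(BITS - 1)
        b = random.getrandbits(BITS - 1)
        exact = a + b
        if exact:
            total += abs(inexact_add(a, b) - exact) / exact
    return total / trials


if __name__ == "__main__":
    print(f"average relative error: {average_relative_error():.4%}")
```

The point of the exercise is that the average deviation stays small and predictable because errors are confined to the least significant bits, which is the kind of statistical guarantee that lets the hardware skip expensive correction.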

Also, by trimming away (pruning, in the jargon) some rarely used portions of the chip and relaxing voltage requirements locally, researchers have been able to cut energy requirements even further.

“In the latest tests, we showed that pruning could cut energy demands 3.5 times with chips that deviated from the correct value by an average of 0.25 percent,” said study co-author Avinash Lingamneni, a Rice graduate student. “When we factored in size and speed gains, these chips were 7.5 times more efficient than regular chips. Chips that got wrong answers with a larger deviation of about 8 percent were up to 15 times more efficient.”
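To get a rough feel for the trade-off described in the quote, here is a back-of-the-envelope sketch of pruning: drop the k lowest result bits of an adder, as if that circuitry were physically removed, and watch how the average deviation grows as more of the chip is trimmed. Treating pruned bits as a stand-in for saved area and energy is my own simplifying assumption, not the Rice-NTU methodology.

```python
import random

BITS = 16


def pruned_add(a: int, b: int, pruned_bits: int) -> int:
    """Add, then zero the lowest `pruned_bits` bits (their logic is 'removed')."""
    mask = ~((1 << pruned_bits) - 1)
    return (a + b) & mask


def average_deviation(pruned_bits: int, trials: int = 50_000) -> float:
    """Average relative deviation from the exact sum for a given pruning level."""
    total = 0.0
    for _ in range(trials):
        a = random.getrandbits(BITS - 1)
        b = random.getrandbits(BITS - 1)
        exact = a + b
        if exact:
            total += abs(pruned_add(a, b, pruned_bits) - exact) / exact
    return total / trials


if __name__ == "__main__":
    for k in range(0, 9, 2):
        print(f"pruned bits: {k:2d} (~{k / BITS:.0%} of the adder)  "
              f"avg deviation: {average_deviation(k):.3%}")
```

Unsurprisingly, the more aggressively you prune, the larger the average deviation, which mirrors the quoted figures: a tiny accepted error buys a modest gain, a larger one buys much more.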


So such inexact microprocessors may be the key to some interesting applications, like the I-slate tablet designed for Indian classrooms with no electricity.
 

Source: here
