
Networks and Life

As you may (or may not!) know, molecular biology often studies biological functions through the interaction networks between molecules rather than examining each component one by one. It's the opposite of the universal divide-and-conquer strategy; I would call it the all-inclusive strategy.
These interaction networks involve myriads (on the order of 10,000) of molecules that interact through various chemical mechanisms, and they are generally represented as a directed graph between molecular compounds. Transcriptional networks describe the relationships between genes and the proteins that regulate them, protein-protein networks define the cascades of interactions between (ingeniously lumped) proteins, and metabolic networks attempt to mimic the flow of metabolic reactions inside living organisms. The idea is to understand how the whole 'thing' works from all those interactions linked together.
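To make the directed-graph picture concrete, here is a minimal Python sketch of a transcriptional network stored as an adjacency dict (regulator mapped to its targets). The gene names are purely illustrative, not real genes.

```python
# A tiny transcriptional network as a directed graph:
# each key is a regulator, each value the list of genes it regulates.
# Gene names are hypothetical, for illustration only.
network = {
    "geneA": ["geneB", "geneC"],   # geneA regulates geneB and geneC
    "geneB": ["geneC"],            # geneB also regulates geneC
    "geneC": [],                   # geneC regulates nothing
}

def targets(graph, node):
    """Genes directly regulated by `node`."""
    return graph.get(node, [])

def reachable(graph, start):
    """All genes reachable from `start`: one 'triggering cascade'."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

A cascade is then just graph reachability: everything downstream of `geneA` is whatever `reachable(network, "geneA")` returns.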



Of course, other kinds of networks are used in many different domains to study more-or-less related items: social networks (linking people who met ten years ago in high school and have nothing in common anymore except that social network), web pages, and so on.
In 2002, the team of Professor Uri Alon at the Weizmann Institute of Science realized that these networks generally contain small recurring sub-networks, called network motifs, which appear in biological networks with far higher probability than in random networks, so they should be there for a reason. The general idea is that some biological sub-functions are recurrently implemented by similar chains of triggering events.
 
Some patterns occurring in biological networks
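The motif-hunting idea itself fits in a few lines of Python: count the occurrences of one well-known motif, the feed-forward loop (x regulates y, and both x and y regulate z), and compare with random directed graphs of the same size. The graphs below are toy examples, not real biological data.

```python
import itertools
import random

def count_ffl(edges, nodes):
    """Count feed-forward loops: edges x->y, x->z and y->z all present."""
    eset = set(edges)
    return sum(
        1
        for x, y, z in itertools.permutations(nodes, 3)
        if (x, y) in eset and (x, z) in eset and (y, z) in eset
    )

def random_digraph(nodes, n_edges, rng):
    """A random directed graph with the same node and edge counts."""
    possible = [(u, v) for u in nodes for v in nodes if u != v]
    return rng.sample(possible, n_edges)

# A motif is "significant" when its count in the real network greatly
# exceeds its average count over many such randomized networks.
```

Running `count_ffl` on a real network and on, say, 1000 draws of `random_digraph` gives exactly the kind of comparison Alon's team made (they used degree-preserving randomization, which this simple sketch does not).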

 
Therefore, mathematicians have been looking for models that capture the effect of those smaller sub-networks. For instance, the Erdős–Rényi model, obtained by linking each pair of nodes independently with a constant probability p in [0,1], has some features of these natural networks (the small-world property of short paths between nodes) but not others, such as their high clustering.
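The Erdős–Rényi construction is simple enough to sketch directly; here is a minimal Python version, assuming an undirected graph stored as an adjacency dict of neighbor sets.

```python
import random

def erdos_renyi(n, p, rng=random):
    """G(n, p): each of the n*(n-1)/2 possible edges is kept
    independently with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj
```

With p = 1 this yields the complete graph, with p = 0 the empty one; for intermediate p the expected degree of each node is p(n-1).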

Recently, E. Ravasz and A.-L. Barabási developed the concept of hierarchical networks: a network formed by aggregating the same pattern in a hierarchical manner. First, nodes are assembled into the base pattern; then a handful of copies of that pattern are made and connected to each other following the same pattern, and so on.
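That replicate-and-wire construction can be sketched in Python. This is a crude approximation of the Ravasz–Barabási scheme, not a faithful reimplementation: it starts from a complete graph on four nodes, makes three replicas at each level, and wires every non-hub node of each replica to the central hub.

```python
import itertools

def hierarchical(levels, base=4):
    """Crude hierarchical-network sketch (Ravasz–Barabási style).
    Level 0 is a complete graph on `base` nodes, with node 0 as hub.
    Each level makes base-1 replicas of the current graph and links
    the replicas' non-hub nodes to the central hub (node 0)."""
    edges = set(itertools.combinations(range(base), 2))
    n = base
    for _ in range(levels):
        size = n
        new_edges = set(edges)
        for c in range(1, base):              # base-1 replicas
            offset = c * size
            for u, v in edges:                # copy internal edges
                new_edges.add((u + offset, v + offset))
            for node in range(offset, offset + size):
                if node % base != 0:          # skip replica hubs (crude rule)
                    new_edges.add((0, node))
        edges = new_edges
        n *= base
    return n, edges
```

Each level multiplies the node count by `base`, and the central hub accumulates links to ever more distant replicas, which is what produces the hub-dominated, self-similar structure.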



In hierarchical networks, the clustering coefficient (the average, over all nodes, of the ratio between the number of links among a node's neighbors and the number of links they would have if they formed a clique) is a constant independent of the number of nodes in the graph, close to 0.606 for the graph above. But the function C(k), measuring the clustering coefficient as a function of the node degree k, follows a power law, C(k) ~ 1/k. These researchers showed [2] that the metabolic network of Escherichia coli, after reduction, is very similar to a hierarchical network.
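The per-node clustering coefficient can be computed straight from that definition; a minimal Python sketch, for an undirected graph stored as an adjacency dict of neighbor sets:

```python
def local_clustering(adj, node):
    """Fraction of realized links among the neighbors of `node`:
    (links between neighbors) / (links in a clique of the same size)."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0                    # clustering undefined; use 0 by convention
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2.0 * links / (k * (k - 1))
```

Averaging `local_clustering` over all nodes gives the graph's clustering coefficient, and grouping the values by degree k is exactly how one measures the C(k) ~ 1/k power law.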


Source: Images des Mathématiques, Fernando Alcalde (in French)

[2] E. Ravasz, A. L. Somera, D. A. Mongru, Z. N. Oltvai, A.-L. Barabási, Hierarchical organization of modularity in metabolic networks. Science 297 (2002), 1551–1555.
