Sebastian Pokutta's Blog

Mathematics and related topics

Archive for May 2009

Man vs. machine: protein folding

with one comment

Protein structure prediction is an important problem for developing new treatments against diseases. A protein's structure determines to a large extent how it works, and thus being able to predict the structure is essential for developing new drugs (see fold.it):

Protein structure prediction: As described above, knowing the structure of a protein is key to understanding how it works and to targeting it with drugs. A small protein can consist of 100 amino acids, while some human proteins can be huge (1000 amino acids). The number of different ways even a small protein can fold is astronomical because there are so many degrees of freedom. Figuring out which of the many, many possible structures is the best one is regarded as one of the hardest problems in biology today and current methods take a lot of money and time, even for computers. Foldit attempts to predict the structure of a protein by taking advantage of humans’ puzzle-solving intuitions and having people play competitively to fold the best proteins.
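To get a feeling for “astronomical”, a classic back-of-the-envelope argument in the spirit of Levinthal’s paradox: assuming merely three possible backbone conformations per amino acid, a small protein of 100 amino acids already admits 3^100 ≈ 5 · 10^47 candidate folds, far too many for any exhaustive search.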

Unfortunately, from a computational point of view, structure prediction is a tough problem. The problem has a significant 3D geometric component, though, which makes it accessible to humans. Exploiting this fact is exactly what fold.it aims for: it is a game that has humans fold proteins presented as puzzles. The goals of fold.it are:

For protein structure prediction, the eventual goal is to have human folders work on proteins that do not have a known structure. This would require first attracting the attention of scientists and biotech companies and convincing them that the process is effective. Another goal is to take folding strategies that human players have come up with while playing the game, and automate these strategies to make protein-prediction software more effective. These two goals are more or less independent and either or both may happen.

The more interesting goal for Foldit, perhaps, is not in protein prediction but protein design. Designing new proteins may be more directly practical than protein prediction, as the problem you must solve as a protein designer is basically an engineering problem (protein engineering), whether you are trying to disable a virus or scrub carbon dioxide from the atmosphere. It’s also a relatively new field compared to protein prediction. There aren’t a lot of automated approaches to protein design, so Foldit’s human folders will have less competition from the machines.

Wired ran an interesting article a few weeks ago about fold.it and its performance in the CASP (Critical Assessment of Techniques for Protein Structure Prediction) competition, where the human-folded proteins ranked quite competitively. Exploiting human intelligence for certain computations is not completely new: captchas were probably one of the first applications that exploited human pattern-recognition capabilities in order to differentiate between humans and machines, and Amazon runs a whole service, “MTurk”, which provides an infrastructure for utilizing “idling Internet users”. The new aspect here might be the design problem mentioned above, which aims at constructing proteins with specific properties by leveraging not only human pattern-recognition capabilities but also creativity.

Written by Sebastian

May 26, 2009 at 9:24 pm

Gurobi standalone version released

with one comment

At the risk of repeating the news (see also here, and here): the standalone version of Gurobi has been released. See the other posts for more information.
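For the curious, here is a minimal sketch of what solving a tiny LP looks like through Gurobi's Python bindings (gurobipy). The toy model and all numbers are made up for illustration, and details of the API may differ between versions:

from gurobipy import Model, GRB

m = Model("toy")                 # made-up toy model
x = m.addVar(ub=3.0, name="x")   # 0 <= x <= 3
y = m.addVar(name="y")           # y >= 0 (default lower bound)
m.update()                       # older versions need this before using new vars
m.setObjective(x + 2 * y, GRB.MAXIMIZE)
m.addConstr(x + y <= 4.0, "c1")
m.optimize()

if m.status == GRB.OPTIMAL:
    print("x =", x.X, " y =", y.X, " objective =", m.objVal)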

Written by Sebastian

May 19, 2009 at 5:01 pm

Posted in Software

Rama Cont on contagious default and systemic risk

with 2 comments

A few days ago (May 14th), Rama Cont from Columbia gave a very interesting talk at the Frankfurt School of Finance & Management about contagious default and systemic risk in financial networks. From the abstract:

The ongoing financial crisis has simultaneously underlined the importance of systemic risk and the lack of adequate indicators for measuring and monitoring it. After describing some important structural features of banking networks, we propose an indicator for measuring the systemic impact of the failure of a financial institution – the Systemic Risk Index – which combines a traditional factor-based modeling of financial risks with network contagion effects resulting from mutual exposures. Simulation studies on networks with realistic structures – in particular using data from the Brazilian interbank network – underline the importance of network structure in assessing financial stability and point to the importance of leverage and liquidity ratios of financial institutions as tools for monitoring and controlling systemic risk. In particular, we investigate the role played by credit default swap contracts and their impact on financial stability and systemic risk. Our study leads to some policy implications for a more efficient monitoring of systemic risk and financial stability.

He presented pretty remarkable results of a simulation study he conducted together with two of his students. The main goal was to introduce a “systemic risk index” (SRI) that quantifies the impact of an institution’s default on the financial system through direct connections (i.e., counterparty credit risk) or indirect connections (e.g., as seller of CDS protection). Based on that, he compared the effect of risk-mitigating techniques (e.g., limits on leverage, capital requirements) on the SRI. The simulation was based on random graphs constructed via preferential attachment, i.e., new nodes in the system tend to connect to the better-connected ones (the Matthew principle). The constructed graphs were structurally similar to real-world interbank networks observed in Brazil and Austria. At the risk of oversimplifying, the key insights were as follows (a toy simulation in the spirit of the study is sketched after the list):

  1. The main message: it is not about being “too big to fail” but about being “too interconnected to fail”. In the presented study, size was completely uncorrelated with the potential impact given default. That is especially interesting given that in the current discussion about the financial crisis, one prominent line of argument demands splitting up large financial institutions. Assuming that the results are realistic, this would provide only minimal systemic risk mitigation but might increase the administrative overhead of monitoring all these smaller units. Another consequence that might be even more critical is the implied moral hazard: whereas gaining a certain size in order to be “too big to fail” is a rather hard task, becoming “too interconnected to fail” is rather simple. Given the large impact of only a few CDSs (described in 4. below), it might suffice to buy and sell a lot of CDSs (or other structures) back-to-back (i.e., you are long and short the same position and thus net flat) in order to insure yourself against failure by weaving yourself deep into the financial network (see also 2. below).
  2. In the real-world networks that were studied, only a few hubs account for the largest proportion of potential damage; these are the ones that are highly connected. Thus, monitoring focused on these particular nodes, which could be identified using the proposed SRI, might already lead to a considerable mitigation of systemic risk.
  3. It does make a difference whether you impose a limit on leverage rather than capital requirements only. The impact of the worst nodes in the network dropped considerably in the presence of limits on leverage (as employed, for example, in Canada).
  4. Comparing the situation with and without CDSs, the presence of only a few CDSs can change the dynamics of the default propagation dramatically by introducing “shortcuts” into the network, an effect similar to the small-world phenomenon.
  5. In the model at hand, it didn’t make a difference whether CDS contracts were speculative or hedging instruments. Note that this holds under the assumption that the overall number of contracts in the simulation remained constant and only the proportions were altered; otherwise, under the mainstream assumption that more than 50% of all CDSs are speculative, removing those would reduce the number of contracts present by more than 50% and thus considerably reduce the risk introduced through “shortcuts”.
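To make the mechanics tangible, here is the toy sketch announced above (in Python): a scale-free network is generated by preferential attachment, banks are endowed with made-up capital buffers and bilateral exposures, and a default cascade is run from each node to obtain a crude systemic risk index. To be clear, this is my own illustration and not Cont's model; the loss rule and all parameters are invented for the example.

import random

random.seed(0)
N, M = 200, 3                      # number of banks; links per new bank

# --- scale-free interbank network via preferential attachment ---
edges, weighted = set(), list(range(M))
for new in range(M, N):
    # pick M distinct existing banks, biased towards high degree
    chosen = set()
    while len(chosen) < M:
        chosen.add(random.choice(weighted))
    for old in chosen:
        edges.add((new, old))
        weighted += [new, old]     # degree-proportional sampling pool

# made-up balance sheets: bilateral exposures and capital buffers
exposure = {e: random.uniform(0.5, 1.5) for e in edges}
capital = {i: random.uniform(0.8, 2.0) for i in range(N)}

def cascade(first):
    """Return the set of banks that default after bank `first` fails."""
    defaulted = {first}
    losses = {i: 0.0 for i in range(N)}
    frontier = [first]
    while frontier:
        bank = frontier.pop()
        for (a, b), x in exposure.items():
            for victim, source in ((a, b), (b, a)):
                if source == bank and victim not in defaulted:
                    losses[victim] += x          # full write-down (a simplification)
                    if losses[victim] > capital[victim]:
                        defaulted.add(victim)
                        frontier.append(victim)
    return defaulted

# crude "systemic risk index": fraction of the system lost if bank i fails
sri = {i: len(cascade(i)) / N for i in range(N)}
worst = max(sri, key=sri.get)
print("most critical bank:", worst, "SRI:", round(sri[worst], 2))

Even in this crude toy, the banks with the highest index tend to be the few highly connected hubs created by preferential attachment, which is exactly the qualitative message of 1. and 2. above.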

Written by Sebastian

May 16, 2009 at 11:39 pm

Wolfram|Alpha is online now

leave a comment »

Wolfram|Alpha, the computational search engine, is online now. Check it out yourself – it is definitely worth it.

A few examples that I tried:

More examples are available here.

Written by Sebastian

May 16, 2009 at 10:41 am

optimization extreme – squeezing 24 rooms into a 32 sqm flat

leave a comment »

Gary Chang, the award-winning Hong Kong architect, managed to re-design his apartment so that 24 rooms actually fit into his tiny Hong Kong flat. This is done by a system of shifting walls that can easily be rearranged into different setups. Check out the story at the NY Times (including a slideshow).

Written by Sebastian

May 10, 2009 at 9:26 am

Is poker a game of skill?

leave a comment »

I followed an interesting discussion on the Wall Street Journal website about whether poker is a game of luck or rather of skill. The discussion in this particular case was started after a study (sponsored by a major poker website) claimed that success is predominantly determined by skill. The importance of the question here stems from the legal side: games of chance are considered gambling under U.S. law.

Whether it is more a game of luck than of skill is one question; in any case, the question fueled the development of a very important discipline in operations research, economics, and mathematics: game theory. John von Neumann, the father of game theory, was motivated by exactly the same question, i.e., is poker a game of skill or luck:

For Von Neumann, the inspiration for game theory was poker, a game he played occasionally and not terribly well. Von Neumann realized that poker was not guided by probability theory alone, as an unfortunate player who would use only probability theory would find out. Von Neumann wanted to formalize the idea of “bluffing,” a strategy that is meant to deceive the other players and hide information from them.

In his 1928 article, “Theory of Parlor Games,” Von Neumann first approached the discussion of game theory, and proved the famous Minimax theorem. From the outset, Von Neumann knew that game theory would prove invaluable to economists. He teamed up with Oskar Morgenstern, an Austrian economist at Princeton, to develop his theory. [more]
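For reference, the minimax theorem asserts that in any finite two-player zero-sum game with payoff matrix A, randomization closes the gap between playing first and playing second:

\[
\max_{x \in \Delta_m} \min_{y \in \Delta_n} x^{\top} A y \;=\; \min_{y \in \Delta_n} \max_{x \in \Delta_m} x^{\top} A y,
\]

where \(\Delta_m\) and \(\Delta_n\) are the sets of mixed strategies (probability distributions over pure strategies) of the two players. It is exactly this mixing, i.e., playing a randomized strategy, that gives bluffing its formal meaning.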

On the other hand, several poker experts question that claim, suggesting instead that luck dominates. This question is probably not going to be settled anytime soon, but what would be interesting to know is whether there is a high correlation between being an unsuccessful poker player and believing that it is a game of luck, and between being a successful poker player and believing that it is a game of skill, some kind of survivorship bias. 😉

Written by Sebastian

May 8, 2009 at 4:05 pm

Ask an engineer

leave a comment »

Did you ever wonder “why can’t fusion energy solve the global energy crisis?” or “why hasn’t commercial air travel gotten any faster since the 1960s?” Then check out MIT’s ask an engineer, which offers a lot of answers to a lot of questions.

Written by Sebastian

May 7, 2009 at 10:31 pm