Sebastian Pokutta's Blog

Mathematics and related topics

Archive for the ‘economics’ Category

Long time no see

with one comment

It has been quite a while since I wrote my last blog post; the last one that really counts (at least to me) was back in February. As I pointed out at some point, it was not that I was lacking something to write about, but rather that I did not want to "touch" certain topics. That in turn made me wonder what a blog is good for when, in fact, one is still concerned about whether to write about certain topics. So I got the feeling that in the end, all this web 2.0 authenticity, all this being really open, direct, authentic, etc. is nothing but a (self-)deception. On the other hand, I also did not feel like writing about yet another conference. I have to admit that I have been to some really crappy conferences lately, and since I did not have anything positive to say I preferred not to say anything at all. There were a few notable exceptions, e.g., MIP or IPCO. Another thing that bothered me (and still does) is the dilution of real information with nonsense. In fact I have the feeling that the signal-to-noise ratio dropped considerably over the last two years, and I did not want to add to this further. This feeling of over-stimulation with web 2.0 noise seems to be a global trend (at least this is my perception). Many people gave up their blogs or have been somewhat neglecting them. Also, maintaining a blog with, say, weekly posts (apart from passing on a few links or announcements) takes up a lot of time, time that arguably could be better invested in doing research and writing papers.

Despite those issues and concerns, I do believe that the web with all its possibilities can really enhance the way we do science. As with all new technologies, one has to find a modus operandi that provides positive utility. In principle the web can provide an information democracy and diversification; however, any "democratic endeavor" on the web has a huge enemy: the Matthew effect, commonly known as "more gains more". This term, coined by R.K. Merton, derives its name from the following passage in the Gospel of Matthew (see also Wikipedia):

For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. — Matthew 25:29, New Revised Standard Version

In principle it states that the "rich get richer" while the "poor get poorer". If we think of the different social networks (Facebook, MySpace, Friendster), it refers to the effect that the one with the largest user base will attract more people than the one with a smaller base. In the next "round" this effect is even more pronounced, until the smaller competitor virtually ceases to exist. In the real world this effect is often limited by various kinds of "friction". There might be geographic limitations, cultural barriers, etc., that wash out the advantage of the larger one, so that the compounding nature of the effect is slowed down or non-existent (this holds true even in the highly globalized world we live in). That is the reason why dry cleaners, bakeries, and other forms of local business are not outperformed by globalized companies (ok, some are). On the internet, however, there is often no inhibitor to the Matthew effect. It often translates into some type of preferential attachment, although with the difference that the overall user base is limited, so that the gain of one party is the loss of another (preferential attachment processes are usually not zero-sum).

So what does this mean in the context of blogs? Blog reading is, to a certain extent, zero-sum: there is a limited amount of time that we are willing to spend reading blogs. Blogs with a large reader base will have more active discussions and move higher up the priority list for reading. In the end the smaller ones might only have a handful of readers, making it hard to justify the amount of time spent writing the posts. Scaling down the frequency of posts might even amplify the effect, as it might be perceived as inactivity. One way out of this dilemma could be some form of joining the smaller units into larger ones, i.e., either "digesting" several blogs into a larger one or, alternatively, shared blogging. I haven't made up my mind yet what (if!) I am going to do about this. But I guess, in the end, some type of consolidation is inevitable.

Having bothered you with this abstruse mixture of folklore, economics, and the internet, I actually intended to write about something else, but somewhat related, today: deciding whether and when to dump a project. This problem is very much inspired by my previous experiences as a consultant and by recent decisions about academic projects. More precisely, suppose that you have a project and an estimate for its overall duration. At some point you want to review the progress, and based on what you see at this point you want to make a call on whether or not to abandon the project. The longer you wait with your review, the better the information you gain from it. On the other hand, you have potentially wasted too much time and resources to increase the confidence in your decision. In fact it might even make sense not to start a project at all. Suppose that you have an a priori estimate for the probability of success of your project, say p. Further let r(t) denote our probability of erring at the review, with r(0) = 1/2 and r(1) = 0; that is, at time t = 0 we do not have any information yet and can only guess, guessing wrong with probability 50%, while at time t = 1 we have perfect information. Let t denote the point in time at which we review the project (as a fraction of the overall duration, here normalized to 1). We have four cases to consider (one might opt for a different payoff function; the following one reflects my particular choice):

  1. The project is going to be successful and at the point of reviewing we guessed right, i.e., we went through with it. In this case the benefit is s. This happens with probability (1-r(t)) p, and the expected payoff for this scenario is (1-r(t)) p s. [alternatively one could consider the benefit s - t, or something else]
  2. The project is going to be successful and at the point of reviewing we guessed wrong, i.e., we dropped the project. In this case the benefit is -(t + s), i.e., we lose our investment up to that point (here with unit value) and the overall benefit. This happens with probability r(t) p, and the expected payoff is -r(t) p (t+s).
  3. The project is going to fail and we guessed right: benefit -t, i.e., the investment so far. Expected payoff: -(1-r(t)) (1-p) t.
  4. The project is going to fail and we guessed wrong, i.e., we went through with it: benefit -T, where T is some cost for this scenario. Expected payoff: -r(t) (1-p) T.

All in all we have the following expected overall payoff as a function of t:

\mathbb{E}(t) = (1-r(t))\, p\, s - (1-r(t))(1-p)\, t - r(t)\, p\, (t+s) - r(t)(1-p)\, T

Next we have to define the function which models our increase in confidence. I opted for a function that gains information in a logarithmic fashion, i.e., in the beginning we gain confidence fast and then there is a tailing-off effect:

r_k(t) := \frac{\log(1+k) - \log(k+t)}{2\,(\log(1+k) - \log(k))}

The parameter k can be understood as the rate of learning. For example for k = 0.01 it looks like this:


Assuming s = 1 and T = 1, i.e., the payoffs are basically the invested time, and p = 30%, the expected payoff as a function of the review time t looks like this (payoff: blue line, error rate: red line):

The maximum payoff is reached for a review after roughly 20% of the estimated overall time. However, it is still negative. This suggests that we do not learn fast enough to make a well-informed decision. For example, for k = 0.001 the situation looks different:

The optimal point for a review is after 14% of the estimated project time. Once you have estimated your rate of learning, you can also determine which projects you should not get involved with at all. For k = 0.001 this is the case when the probability of success p is less than roughly 27%.
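
For the curious, here is a minimal sketch of how such numbers can be reproduced; it simply implements the error function and the expected payoff exactly as defined above and optimizes over the review time (the helper names and the grid search for the break-even probability are my own choices):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def r(t, k):
    """Review error probability at time t (fraction of project length):
    r(0) = 1/2 (pure guess), r(1) = 0 (perfect information)."""
    return (np.log(1 + k) - np.log(k + t)) / (2 * (np.log(1 + k) - np.log(k)))

def expected_payoff(t, p, k, s=1.0, T=1.0):
    """Expected payoff of reviewing at time t, given success probability p
    (the four cases from the list above)."""
    return ((1 - r(t, k)) * p * s
            - (1 - r(t, k)) * (1 - p) * t
            - r(t, k) * p * (t + s)
            - r(t, k) * (1 - p) * T)

def best_review_time(p, k):
    """Maximize the expected payoff over the review time t in (0, 1]."""
    res = minimize_scalar(lambda t: -expected_payoff(t, p, k),
                          bounds=(1e-9, 1.0), method="bounded")
    return res.x, -res.fun

if __name__ == "__main__":
    t_star, payoff = best_review_time(p=0.3, k=0.001)
    print(f"optimal review time: {t_star:.2f}, expected payoff: {payoff:.3f}")
    # break-even success probability: smallest p with non-negative optimal payoff
    p_grid = np.linspace(0.01, 0.99, 99)
    p_min = next((p for p in p_grid if best_review_time(p, k=0.001)[1] >= 0), None)
    print(f"break-even p for k = 0.001: about {p_min:.2f}")
```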

Although this model is very simple, it provides some nice qualitative (and partly quantitative) insights. For example, that there are indeed projects that you should not even get involved with; this is somewhat clear from intuition, but I was surprised that the probability of success of those can still be quite high. Also, when your rate of learning increases over time (due to experience with other projects), you can get involved with more risky endeavors, because your higher review confidence allows you to purge projects more effectively. For example, when k goes down to, say, k = 0.00001 (which is not unrealistic, as in this case shortly after the beginning of the project you would err with a probability of only around 20%), you could get involved with projects that have a probability of success of only 19%.

And no complaints about the abrupt ending – I consumed my allocated blogging time.

Information asymmetry and beating the market

leave a comment »

Recently, I was wondering how much money you can effectively gain by investing, given a certain information advantage. Suppose that you want to invest some money, for the sake of simplicity say $10,000. Can you expect to extract an above-average return from the market, given that you have an information advantage? If you believe in the strong form of the efficient market hypothesis then the answer is of course no. If not, is it at least theoretically possible?

Let us consider a simplified setting. Suppose that we can invest (long/short) in a digital security (e.g., digital options) with payouts 0 and 1 (at a price of 0.5), and let us further suppose that it pays out 1 with a probability of 50%. Now assume that we have a certain edge over the market, i.e., we can predict the outcome slightly more accurately, say with (50 + edge)% accuracy. If we have a good estimate of our edge, we can use the Kelly criterion to allocate our money. The Kelly criterion, named after John L. Kelly, Jr., determines the proportion of one's bankroll to bet so that the overall utility is maximized; this criterion is provably optimal. It was presented by Kelly in his seminal 1956 paper "A New Interpretation of Information Rate". In this paper Kelly links the channel capacity of a private wire (inside information) to the maximum return that one can extract from a bet. While this bound is a theoretical upper bound, it is rather strong in its negative interpretation: if you do not have any inside information (which includes being just smarter than everybody else or other intangible edges), you cannot extract any cash. The Kelly criterion arises as an optimal money management strategy derived from the link to Shannon's information theory, and in its simplest form it can be stated as:

f = \frac{bp-q}{b},

where b:1 are the odds, p is the probability of winning, and q = 1-p the probability of losing. So in our setting, where we basically consider fair coin tosses whose outcomes we can predict with (50 + edge)% accuracy, an edge of 1% or 100bp is considerable. Using the money management strategy from above (neglecting taxes, transaction fees, etc.), we obtain:

100bp edge

with an initial bankroll of $10,000; the y-axis is log10(bankroll), the x-axis is the number of bets. The five lines correspond to the 5%, 25%, 50%, 75%, and 95% percentiles, computed on the basis of 5,000 Monte Carlo runs. So even the 5% percentile sees a ten-fold increase of the bankroll after roughly 4,100 bets, whereas the 95% percentile is already at a 100-fold increase. In terms of real deals the number of bets is already considerable though; after all, which private investor does 4,000 transactions?
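
A simulation of this kind takes only a few lines. The sketch below is my own minimal version, assuming repeated even-money bets that are won with probability 0.5 plus the edge, with the Kelly fraction of the current bankroll staked on every bet:

```python
import numpy as np

def kelly_fraction(p, b=1.0):
    """Kelly fraction f = (b*p - q)/b for odds b:1 and win probability p."""
    return (b * p - (1 - p)) / b

def simulate_final_bankrolls(edge_bp, n_bets=10_000, n_runs=5_000,
                             bankroll=10_000.0, seed=0):
    """Monte Carlo: repeatedly bet the Kelly fraction of the current bankroll
    on an even-money bet that is won with probability 0.5 + edge."""
    p = 0.5 + edge_bp / 10_000             # e.g. 100bp -> p = 0.51
    f = kelly_fraction(p)
    rng = np.random.default_rng(seed)
    b = np.full(n_runs, bankroll)
    for _ in range(n_bets):
        wins = rng.random(n_runs) < p
        b *= np.where(wins, 1 + f, 1 - f)  # win: +f of bankroll, lose: -f
    return b

if __name__ == "__main__":
    for edge in (100, 50, 10):
        final = simulate_final_bankrolls(edge_bp=edge)
        print(edge, np.percentile(final, [5, 25, 50, 75, 95]).round(0))
```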

Unfortunately, an edge of 100bp is very optimistic, and for, say, a 50bp edge the situation already looks quite different: the 50% percentile barely reaches a ten-fold increase after 10,000 bets.

50bp edge

And now let us come to the more realistic scenario when considering financial markets. Here an edge of 10bp is already considered significant. Given all the limitations of a private investor, i.e., being further down the information chain, sub-optimal market access, etc., assuming an edge of 10bp would still be rather optimistic. In this case, using an optimal allocation of funds, we have the following:

10bp edge

Here the 25% percentile actually lost money, and even the 50% percentile barely gained anything over 10,000 bets. In the long run a strictly positive growth occurs here as well, but for 10bp it takes extremely long: while you might be able to do 4,000 deals over the course of, say, 10-30 years, even after 100,000 bets the 5% percentile barely reaches a 29% gain. Given transaction costs, taxes, fees, etc., in reality the situation looks even worse (especially when considering more complicated financial structures). So it all comes down to the question of how large your edge is.

Although the setting here is extremely simplified, similar behavior can be shown for more complicated structures (using, e.g., random walks).

Written by Sebastian

January 17, 2010 at 8:04 pm

Let us have a securitization party

with one comment

The concept of securitization is very versatile. From Wikipedia:

Securitization is a structured finance process that distributes risk by aggregating debt instruments in a pool, then issues new securities backed by the pool. The term “Securitisation” is derived from the fact that the form of financial instruments used to obtain funds from the investors are securities. As a portfolio risk backed by amortizing cash flows – and unlike general corporate debt – the credit quality of securitized debt is non-stationary due to changes in volatility that are time- and structure-dependent. If the transaction is properly structured and the pool performs as expected, the credit risk of all tranches of structured debt improves; if improperly structured, the affected tranches will experience dramatic credit deterioration and loss. All assets can be securitized so long as they are associated with cash flow. Hence, the securities which are the outcome of Securitisation processes are termed asset-backed securities (ABS). From this perspective, Securitisation could also be defined as a financial process leading to an issue of an ABS.

The cash flows of the initial assets are paid out according to the seniority of the tranches in a waterfall-like structure: first the claims of the most senior tranche are satisfied, and if there are remaining cash flows, the claims of the next tranche are satisfied. This continues as long as there are cash flows left to cover claims:

Individual securities are often split into tranches, or categorized into varying degrees of subordination. Each tranche has a different level of credit protection or risk exposure than another: there is generally a senior (“A”) class of securities and one or more junior subordinated (“B,” “C,” etc.) classes that function as protective layers for the “A” class. The senior classes have first claim on the cash that the SPV receives, and the more junior classes only start receiving repayment after the more senior classes have repaid. Because of the cascading effect between classes, this arrangement is often referred to as a cash flow waterfall. In the event that the underlying asset pool becomes insufficient to make payments on the securities (e.g. when loans default within a portfolio of loan claims), the loss is absorbed first by the subordinated tranches, and the upper-level tranches remain unaffected until the losses exceed the entire amount of the subordinated tranches. The senior securities are typically AAA rated, signifying a lower risk, while the lower-credit quality subordinated classes receive a lower credit rating, signifying a higher risk.

In more mathematical terms, securitization basically works as follows: take your favorite set of random variables (for the sake of simplicity say binary ones) and consider the joint distribution of these variables (pooling). In a next step, determine percentiles of the joint distribution (of default, i.e., 0) that you sell off separately (tranching). The magic happens via the law of large numbers and the central limit theorem (and variants thereof): although each variable can have a high probability of default, the probability that more than, say, x% of them default at the same time decreases (almost) exponentially. Thus the resulting x-percentile can have a low probability of default already for small x. That is the magic behind securitization, which is called credit enhancement.
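
To see this effect in numbers, here is a minimal sketch under the simplest possible assumption of independent, identically distributed loans (a toy example of mine, not a realistic pricing model): the probability that pool losses reach a fixed attachment point drops rapidly as the pool grows.

```python
import numpy as np
from scipy.stats import binom

def hit_probability(n_loans, p_default, attachment):
    """Probability that more than `attachment` (a fraction of the pool) of
    n_loans independent loans default, i.e. that losses eat through the
    tranches below that attachment point."""
    k = int(np.floor(attachment * n_loans))
    return binom.sf(k, n_loans, p_default)   # P(#defaults > k)

if __name__ == "__main__":
    # each loan defaults with probability 10%; tranche attaches at 20% pool losses
    for n in (10, 50, 100, 500):
        print(n, hit_probability(n, p_default=0.10, attachment=0.20))
```

With correlated loans the picture deteriorates considerably, which is exactly where the estimation problems mentioned below come in.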

So, given that this process of risk mitigation and tailoring of risks to the risk appetite of potential investors is rather versatile, why not apply the same concept to other cash flows that bear a certain risk of default and turn them into structured products 😉

(a) Rents: Landlords face the problem that a tenant's credit quality is basically unknown. Often, a statement about the tenant's income and liabilities is supposed to help estimate the risk of default. But this procedure can, at best, serve as an indicator. So why not use the same process to securitize the rent cash flows and sell the corresponding tranches back to the landlords? This would have several upsides. First of all, the landlord obtains a significantly more stable cash flow and, depending on the risk appetite, could even invest in the more subordinated tranches. This could potentially reduce rents, as the risk premium charged by the landlord due to his/her potentially risk-averse preferences could be reduced to the risk-neutral amount (plus some spreads, e.g., operational and structuring costs). The probability of default could be estimated significantly more easily for the pooled rent cash flows, as due to diversification it is well approximated by the expected value (maybe categorized into subclasses according to credit ratings). Of course, one would have to deal with problems such as adverse selection and the potentially hard task of estimating the correlation, which can have a severe impact on the value of the tranches (see my post here).

(b) Sports bets: Often these bets, seen as random variables, have a high probability of default (e.g., roughly 50% for a balanced win/loss bet). To reduce the risk through diversification, a rather large amount of cash has to be invested to obtain a reasonable risk profile. Again, securitizing those cash flows could create securities with more tailored risk profiles that could be of interest to rather risk-averse people on the one hand and risk-seeking gamblers on the other.

(c) …

That is the wonderful world of structured finance 😉

Written by Sebastian

December 30, 2009 at 2:35 pm

Heading off to AFBC 2009

with one comment

I am on my way to the 22nd Australasian Finance and Banking Conference 2009 in Sydney. So, what the hell is a mathematician doing at a finance conference? Well, basically mathematics, and in particular optimization and operations research. I am thrilled to see the current developments in economics and finance that take computational aspects, which ultimately limit the amount of rationality that we can get, into account (I wrote about this before here, here, and here). In fact, I am convinced that these aspects will play an important role in the future, especially for structured products. After all, who is going to buy a structure whose value is impossible to compute? Not to mention other complications such as bad data or dangerous model assumptions (such as static volatilities and correlations, which are still used today!). Most valuation problems can be cast as optimization problems, and especially the more complex structured products (e.g., mean-variance optimizers) explicitly ask for the solution of an optimization problem in order to be valued. For the simpler structures, Monte Carlo based approaches (or bi-/trinomial trees) are sufficient for pricing. As Arora, Barak, Brunnermeier, and Ge show in their latest paper, for more complex structures (e.g., CDOs) these approaches might fall short of capturing the real value of the structures, due to, e.g., deliberate tampering.

I am not going to talk about the computational resource aspects, though: I will be talking about my paper "Optimal Centralization of Liquidity Management", which is joint work with Christian Schmaltz from the Frankfurt School of Finance and Management. The problem that we are considering is basically a facility location problem: in a large banking network, where and how do you manage liquidity? In a centralized liquidity hub, or rather in smaller liquidity centers spread all over the network? Being short on liquidity is a very expensive matter: either one has to borrow money via the interbank market (which is usually dried up, or at least tight, in tougher economic conditions) or one has to borrow via the central bank. If neither is available, the bank goes into a liquidity default. The important aspect here is that the decision on the location and the amount of liquidity produced is driven to a large extent by the volatility of liquidity demand. In this sense a liquidity center turns into an option on cheap liquidity, and in fact the value of a liquidity center can actually be captured in an option framework. The value of the liquidity center is the price of the exact demand information: the more volatility we have, the higher this price will be and the more we save when we have this information in advance. The derived liquidity center location problem implicitly computes the prices of these options, which arise as marginal costs in the optimization model. Here are the slides:

View this document on Scribd
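
As an aside, here is a toy illustration (entirely mine, not the model from the paper) of the "option on cheap liquidity" intuition: if a liquidity center provisions only the expected demand and covers any shortfall at a penalty spread, the expected value of knowing the demand exactly grows linearly with the demand volatility.

```python
import numpy as np

def value_of_demand_information(mu, sigma, penalty_spread, n_sims=100_000, seed=0):
    """Toy sketch: liquidity demand D ~ N(mu, sigma); a branch provisions E[D] = mu
    and covers any shortfall max(D - mu, 0) at a penalty spread. Knowing D in
    advance saves exactly that penalty, so the expected saving is the 'option
    value' of the demand information (the expected shortfall is sigma/sqrt(2*pi))."""
    rng = np.random.default_rng(seed)
    shortfall = np.maximum(rng.normal(mu, sigma, n_sims) - mu, 0.0)
    return penalty_spread * shortfall.mean()

if __name__ == "__main__":
    for sigma in (10, 50, 100):
        print(sigma, round(value_of_demand_information(mu=1_000, sigma=sigma,
                                                       penalty_spread=0.02), 2))
```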

Written by Sebastian

December 13, 2009 at 12:17 pm

The impact of estimation errors on CDO pricing

with 2 comments

Another interesting, nicely written paper about valuing and pricing CDOs is "The Economics of Structured Finance" by Coval, Jurek, and Stafford, which just appeared in the Journal of Economic Perspectives. It nicely complements the paper by Arora, Barak, Brunnermeier, and Ge titled "Computational Complexity and Information Asymmetry in Financial Products" (see also here). The authors argue that already small estimation errors in the correlation and the probability of default (of the underlying loans) can have a devastating effect on the overall performance of a tranche. Whereas the senior tranches remain quite stable in the presence of estimation errors, the overall rating of the junior and mezzanine tranches can be greatly affected. Intuitively this is clear, as the junior and mezzanine tranches act as a cushion for the senior tranches (and in turn the junior tranches are a protection for the mezzanine tranches). What is not so clear at first is that this effect is so pronounced, i.e., that even the smallest estimation errors lead to a rapid decline in the credit quality of these tranches. In fact, what happens here is that the junior and mezzanine tranches pay the price for the credit enhancement of the senior tranches, and the stability of the latter with respect to estimation errors comes at the expense of highly sensitive junior and mezzanine tranches.

This effect becomes even more severe when considering CDO^2 structures, where the loans of the junior and mezzanine tranches are repackaged again. These structures possess a very high sensitivity to the slightest variations or estimation errors in the probability of default or the correlation.

In both cases, slight imprecisions in the estimation can have severe impacts. But also, considering it the other way around, slight changes in the probability of default or the correlation due to changed economic conditions can have a devastating effect on the value of the lower-prioritized tranches.
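
One can get a feeling for this sensitivity with a toy one-factor Gaussian copula pool (again my own illustration, not the model used in either paper): bumping the default probability by one percentage point, or the correlation by 0.1, noticeably shifts the probability that losses reach a mezzanine attachment point.

```python
import numpy as np
from scipy.stats import norm

def mezz_hit_probability(p_default, rho, attachment=0.05,
                         n_loans=100, n_sims=50_000, seed=1):
    """P(pool loss fraction > attachment) in a one-factor Gaussian copula pool:
    loan i defaults if sqrt(rho)*Z + sqrt(1-rho)*eps_i < Phi^{-1}(p_default)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_sims, 1))              # common market factor
    eps = rng.standard_normal((n_sims, n_loans))      # idiosyncratic factors
    defaults = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps < norm.ppf(p_default)
    return (defaults.mean(axis=1) > attachment).mean()

if __name__ == "__main__":
    base = mezz_hit_probability(p_default=0.05, rho=0.2)
    bumped_p = mezz_hit_probability(p_default=0.06, rho=0.2)    # +1pt default prob.
    bumped_rho = mezz_hit_probability(p_default=0.05, rho=0.3)  # +0.1 correlation
    print(base, bumped_p, bumped_rho)
```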

So if you are interested in CDOs, credit enhancement, and structured finance you should give it a look.


Written by Sebastian

December 6, 2009 at 4:32 pm

Arms race in quantitative trading or not?

with 2 comments

Rick Bookstaber recently argued that the arms race in high frequency trading, a form of quantitative trading where effectively time = money ;-), results in a net drain of social welfare:

A second reason is that high frequency trading is embroiled in an arms race. And arms races are negative sum games. The arms in this case are not tanks and jets, but computer chips and throughput. But like any arms race, the result is a cycle of spending which leaves everyone in the same relative position, only poorer. Put another way, like any arms race, what is happening with high frequency trading is a net drain on social welfare.

It is all about milliseconds and being a tiny little bit faster:

In terms of chips, I gave a talk at an Intel conference a few years ago, when they were launching their newest chip, dubbed the Tigerton. The various financial firms who had to be as fast as everyone else then shelled out an aggregate of hundreds of millions of dollar to upgrade, so that they could now execute trades in thirty milliseconds rather than forty milliseconds – or whatever, I really can’t remember, except that it is too fast for anyone to care were it not that other people were also doing it. And now there is a new chip, code named Nehalem. So another hundred million dollars all around, and latency will be dropped a few milliseconds more.

In terms of throughput and latency, the standard tricks are to get your servers as close to the data source as possible, use really big lines, and break data into little bite-sized packets. I was speaking at Reuters last week, and they mentioned to me that they were breaking their news flows into optimized sixty byte packets for their arms race-oriented clients, because that was the fastest way through network. (Anything smaller gets queued by some network algorithms, so sixty bytes seems to be the magic number).

Although high-frequency trading is basically about being fast, and thus time is the critical resource, in quantitative trading in general it is all about computational resources and having the best and smartest ideas and strategies. The best strategy is worthless if you lack the computational resources to crunch the numbers and, vice versa, if you have the computational power but no smart strategies, that does not get you anywhere either.

Jasmina Hasanhodzic, Andrew W. Lo, and Emanuele Viola argue in their latest paper "A Computational View of Market Efficiency" that efficiency in markets has to be considered with respect to the level of computational sophistication: a market can (appear to) be efficient for those participants who use only a low level of computational resources, whereas it can be inefficient for those participants who invest a higher amount of computational resources.

In this paper we suggest that a reinterpretation of market efficiency in computational terms might be the key to reconciling this theory with the possibility of making profits based on past prices alone. We believe that it does not make sense to talk about market efficiency without taking into account that market participants have bounded resources. In other words, instead of saying that a market is “efficient” we should say, borrowing from theoretical computer science, that a market is efficient with respect to resources S, e.g., time, memory, etc., if no strategy using resources S can generate a substantial profit. Similarly, we cannot say that investors act optimally given all the available information, but rather they act optimally within their resources. This allows for markets to be efficient for some investors, but not for others; for example, a computationally powerful hedge fund may extract profits from a market which looks very efficient from the point of view of a day-trader who has less resources at his disposal—arguably the status quo.

More precisely, it is even argued that the high-complexity traders gain from the low-complexity traders (of course, within the studied, simplified market model – but nonetheless!!):

The next claim shows a pattern where a high-memory strategy can make a bigger profit after a low-memory strategy has acted and modified the market pattern. This profit is bigger than the profit that is obtainable by a high-memory strategy without the low-memory strategy acting beforehand, and even bigger than the profit obtainable after another high- memory strategy acts beforehand. Thus it is precisely the presence of low-memory strategies that creates opportunities for high-memory strategies which were not present initially. This example provides explanation for the real-life status quo which sees a growing quantitative sophistication among asset managers.

Informally, the proof of the claim exhibits a market with a certain "symmetry." For high-memory strategies, the best choice is to maintain the symmetry by profiting in multiple points. But a low-memory strategy will be unable to do so. Its optimal choice will be to "break the symmetry," creating new profit opportunities for high-memory strategies.

So although in pure high-frequency trading the relevance of smart strategies might be smaller, and thus it is more (almost only?) about speed, in quantitative trading in general it seems (again, in the considered model) that the combination of strategy and high computational resources might generate a (longer-term) edge. This edge cannot necessarily be compensated for with increased computational resources only, as you still need to have access to the strategy. The model considers memory as the main computational/limiting resource. One might argue that it implicitly reflects the sophistication of the strategy along with the real computational resources, as limited memory might not be able to hold a complex strategy. On the other hand, a lot of memory is pointless without a strategy using it. So both might be considered intrinsically linked.

An easy example illustrating this point is maybe the following. Consider the sequence "MDMD" and suppose that you can only store, say, these 4 letters. A 4-letter strategy might predict something like "MD" for the next two letters. If those letters, however, represent the initials of the weekdays (in German), the next 3 letters will be "FSS". It is impossible to predict this continuation using only the last 4 letters. The situation changes if we can store up to 7 letters, "FSSMDMD": then a prediction is possible.

One point of the paper is that the high-complexity traders might fuel their profits with the shortsightedness of the low-complexity traders, and thus an arms race might be a consequence (to exploit this asymmetry on the one hand and to protect against exploitation on the other). To some extent this is exactly what we are already seeing when traders with "sophisticated" models that, for example, are capable of accounting for the volatility skew arbitrage out less sophisticated traders. On the other hand, it does not help to use a sophisticated model (i.e., more computational resources) if one does not know how to use it; e.g., a Libor market model without an appropriate calibration (non-trivial) is worthless.

Written by Sebastian

September 1, 2009 at 8:38 pm

Rama Cont on contagious default and systemic risk

with 2 comments

A few days ago (May 14th) Rama Cont from Columbia gave a very interesting talk at the Frankfurt School of Finance & Management about contagious default and systemic risk in financial networks. From the abstract:

The ongoing financial crisis has simultaneously underlined the importance of systemic risk and the lack of adequate indicators for measuring and monitoring it. After describing some important structural features of banking networks, we propose an indicator for measuring the systemic impact of the failure of a financial institution – the Systemic Risk Index – which combines a traditional factor-based modeling of financial risks with network contagion effects resulting from mutual exposures. Simulation studies on networks with realistic structures – in particular using data from the Brazilian interbank network – underline the importance of network structure in assessing financial stability and point to the importance of leverage and liquidity ratios of financial institutions as tools for monitoring and controlling systemic risk. In particular, we investigate the role played by credit default swap contracts and their impact on financial stability and systemic risk. Our study leads to some policy implications for a more efficient monitoring of systemic risk and financial stability.

He presented pretty remarkable results of a simulation study he conducted together with two of his students. The main goal was to introduce a "systemic risk index" (SRI) that quantifies the impact of an institution's default on the financial system through direct connections (i.e., counterparty credit risk) or indirect connections (e.g., as a seller of CDS). Based on that, he compared the effect of risk-mitigating techniques (e.g., limits on leverage, capital requirements) on the SRI. The simulation was based on random graphs constructed via preferential attachment, i.e., new nodes in the system tend to connect to the better-connected ones (the Matthew principle again). The constructed graphs were structurally similar to real-world interbank networks observed in Brazil and Austria. At the risk of oversimplifying, the key insights were the following (a toy sketch of such a contagion simulation follows the list):

  1. The main message: it is not about being "too big to fail" but about being "too interconnected to fail". In the presented study, size was completely uncorrelated with the potential impact given default. That is especially interesting given that in the current discussion about the financial crisis, one prominent argument demands the split-up of large financial institutions. Assuming that the results are realistic, this would provide only minimal systemic risk mitigation but might increase the administrative overhead of monitoring all these smaller units. Another consequence that might be even a bit more critical is the implied moral hazard. Whereas gaining a certain size in order to be "too big to fail" is a rather hard task, being "too interconnected to fail" is rather simple: given the large impact of only a few CDSs described below, it might suffice to buy and sell a lot of CDSs (or other structures) back-to-back (i.e., you are long and short the same position and thus net flat) in order to insure yourself against failure by weaving or implanting yourself deep into the financial network (see also 2. below).
  2. Based on the real-world networks that were studied, only a few hubs in the network account for the largest proportion of the potential damage. These are the ones that are highly connected. Thus, monitoring focused on these particular nodes, which could be identified using the proposed SRI, might already lead to a considerable mitigation of systemic risk.
  3. It does make a difference whether you have a limit on leverage compared to capital requirements only. The impact of the worst nodes in the network dropped considerably in the presence of limits on leverage (as employed, for example, in Canada).
  4. Comparing the situation with and without CDSs, the presence of only a few CDSs can change the dynamics of the default propagation dramatically by introducing "shortcuts" into the network, with effects similar to the small-world phenomenon.
  5. In the model at hand, it did not make a difference whether CDS contracts were speculative or hedging instruments. Note that this was under the assumption that the overall number of contracts in the simulation remained constant and only the proportions were altered; otherwise, under the mainstream assumption that more than 50% of all CDSs are speculative, removing those would reduce the number of contracts present by more than 50% and thus considerably reduce the risk introduced through "shortcuts".
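
For the interested reader, here is the toy contagion sketch promised above (entirely my own simplification, not Cont's model): build a preferential-attachment network, let a bank fail once its losses from failed counterparties exceed its capital, and check how the size of the resulting cascade relates to connectedness.

```python
import networkx as nx
import numpy as np

def cascade_size(G, capital, exposure, seed_bank):
    """Number of banks that fail if `seed_bank` defaults: a creditor fails once
    its accumulated losses from failed counterparties exceed its capital."""
    failed = {seed_bank}
    losses = {b: 0.0 for b in G}
    frontier = [seed_bank]
    while frontier:
        next_frontier = []
        for b in frontier:
            for creditor in G.neighbors(b):
                if creditor in failed:
                    continue
                losses[creditor] += exposure[(creditor, b)]
                if losses[creditor] > capital[creditor]:
                    failed.add(creditor)
                    next_frontier.append(creditor)
        frontier = next_frontier
    return len(failed)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = nx.barabasi_albert_graph(200, 2, seed=0)        # preferential attachment
    exposure = {}
    for u, v in G.edges():                              # random bilateral exposures
        exposure[(u, v)] = rng.exponential(1.0)
        exposure[(v, u)] = rng.exponential(1.0)
    capital = {b: 1.5 for b in G}
    impact = [cascade_size(G, capital, exposure, b) for b in G]
    degree = [G.degree(b) for b in G]
    print(np.corrcoef(degree, impact)[0, 1])            # impact tracks connectedness
```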

Written by Sebastian

May 16, 2009 at 11:39 pm