Sebastian Pokutta's Blog

Mathematics and related topics

Archive for the ‘Risk Management’ Category

Long time no see

with one comment

It has been quite a while since I wrote my last blog post; the last one that really counts (at least to me) was back in February. As pointed out at some point, it was not that I was lacking something to write about, but more that I did not want to “touch” certain topics. That in turn made me wonder what a blog is good for when, in fact, one is still concerned about whether to write about certain topics. So I got the feeling that in the end, all this web 2.0 authenticity, all this being really open, direct, authentic, etc., is nothing but (self-)deception. On the other hand, I also did not feel like writing about yet another conference. I have to admit that I have been to some really crappy conferences lately, and since I did not have anything positive to say I preferred not to say anything at all. There were a few notable exceptions, e.g., MIP or IPCO. Another thing that bothered me (and still does) is the dilution of real information with nonsense. In fact I have the feeling that the signal-to-noise ratio dropped considerably over the last two years and I did not want to add to this further. This feeling of over-stimulation with web 2.0 noise seems to be a global trend (at least this is my perception). Many people gave up their blogs or have been somewhat neglecting them. Also, maintaining a blog with, say, weekly posts (apart from passing on a few links or announcements) takes up a lot of time – time that arguably could be better invested into doing research and writing papers.

Despite those issues or concerns I do believe that the web with all its possibilities can really enhance the way we do science. As with all new technologies, one has to find a modus operandi that provides positive utility. In principle the web can provide an information democracy/diversification; however, any “democratic endeavor” on the web has a huge enemy: the Matthew effect, commonly known as “more gains more”. This term, coined by R.K. Merton, derives its name from the following passage in the Gospel of Matthew (see also Wikipedia):

For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away. — Matthew 25:29, New Revised Standard Version

In principle it states that the “rich get richer” while the “poor get poorer”. If we think of the different social networks (facebook, myspace, friendster), it refers to the effect that the one with the largest user base is going to attract more people than the one with a smaller one. In the next “round” this effect is even more pronounced, until the smaller competitor virtually ceases to exist. In the real world this effect is often limited by various kinds of “friction”. There might be geographic limitations, cultural barriers, etc., that wash out the advantage of the larger one, so that the compounding nature of the effect is slowed down or non-existent (this holds true even in the highly globalized world we live in). That is the reason why dry cleaners, bakeries, and other forms of local business are not outperformed by globalized companies (ok, some are). In the context of the internet, however, there is often no inhibitor to the Matthew effect. It often translates into some type of preferential attachment, although with the difference that the overall user base is limited, so that the gain of one party is the loss of another (preferential attachment processes are usually not zero-sum).

So what does this mean in the context of blogs? Blog reading is to a certain extent zero-sum. There is some limited amount of time that we are willing to spend reading blogs. Those with a large reader base will have more active discussions and move higher in the priority list for reading. In the end the smaller ones might only have a handful of readers, making it hard to justify the amount of time spent writing the posts. Downscaling the frequency of posts might even amplify the effect, as it might be perceived as inactivity. One way out of this dilemma could be some form of joining the smaller units into larger ones, i.e., either “digesting” several blogs into a larger one or alternatively “shared blogging”. I haven’t made up my mind yet what (if!) I am going to do about this. But I guess, in the end, some type of consolidation is inevitable.

Having bothered you with this abstruse mixture of folklore, economics, and the internet, I actually intended to write about something else, but somewhat related, today: deciding whether and when to dump a project. This problem is very much inspired by my previous experiences as a consultant and recent decisions about academic projects. More precisely, suppose that you have a project and an estimate for its overall duration. At some point you want to review the progress and, based on what you see at that point, make a call on whether or not to abandon the project. The longer you wait with your review, the better the information you gain from it. On the other hand, you may have wasted too much time and resources just to increase the confidence in your decision. In fact, it might even make sense not to start a project at all. Suppose that you have an a priori estimate for the probability of success of your project, say p. Further, let r(t) denote our error function with r(0) = 1/2 and r(1) = 0, which means that at time t = 0 we do not have any information yet and can only guess, thus guessing wrong with probability 50%, while at time t = 1 we have perfect information. Let t denote the point in time at which we review the project (as a fraction of the overall time, here assumed to be 1). We have four cases to consider (one might opt for a different payoff function; the following one reflects my particular choice):

  1. The project is going to be successful and at the point of reviewing we guessed right, i.e., we went through with it. In this case the benefit is s. This happens with probability (1-r(t)) p and the expected payoff for this scenario is (1-r(t)) p s. [alternatively one could consider the benefit s - t, or something else]
  2. The project is going to be successful and at the point of reviewing we guessed wrong, i.e., we dropped the project. In this case the benefit is -(t + s), i.e., we lose our investment up to that point (here with unit value) and the overall benefit. The probability is r(t) p and the expected payoff is -r(t) p (t+s).
  3. The project is going to fail and we guessed right: benefit -t, i.e., the investment so far. Expected payoff: -(1-r(t)) (1-p) t.
  4. The project is going to fail and we guessed wrong, i.e., we went through with it: benefit -T, where T is some cost for this scenario. Expected payoff: -r(t) (1-p) T.

All in all we have the following expected overall payoff as a function of t:

\mathbb{E}(t) = (1-r(t))\,p\,s - r(t)\,p\,(t+s) - (1-r(t))(1-p)\,t - r(t)(1-p)\,T

Next, we have to define the function which models our increase in confidence. I opted for a function that gains information in a logarithmic fashion, i.e., in the beginning we gain confidence fast and then there is a tailing-off effect:

r_k(t) := \frac{1}{2}\,\frac{\log(1 + k) - \log(k + t)}{\log(1 + k) - \log(k)}
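In code, the error rate and the expected payoff read roughly as follows (a minimal sketch in Python; the function names and defaults are my own choices, not part of the model itself):

```python
import numpy as np

def error_rate(t, k):
    """r_k(t): probability of guessing wrong at review time t; r(0) = 1/2, r(1) = 0."""
    return 0.5 * (np.log(1 + k) - np.log(k + t)) / (np.log(1 + k) - np.log(k))

def expected_payoff(t, p, k, s=1.0, T=1.0):
    """Expected payoff of reviewing at time t, following the four cases above."""
    r = error_rate(t, k)
    return ((1 - r) * p * s          # success, and we correctly kept the project
            - r * p * (t + s)        # success, but we wrongly dropped it
            - (1 - r) * (1 - p) * t  # failure, and we correctly dropped it
            - r * (1 - p) * T)       # failure, but we wrongly kept it
```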

The parameter k can be understood as the rate of learning. For example for k = 0.01 it looks like this:


Assuming s = 1 and T = 1, i.e., the payoffs are basically the invested time, and p = 30%, the expected payoff as a function of the time of review t looks like this (payoff: blue line, error rate: red line):

The maximum payoff is reached for a review after roughly 20% of the estimated overall time. However, it is still negative. This suggests that we do not learn fast enough to make a well-informed decision. For k = 0.001, for example, the situation looks different:

The optimal point for a review is now after 14% of the estimated project time. Once you have estimated your rate of learning, you can also determine which projects you should not get involved with at all. For k = 0.001 this is the case when the probability of success p is less than roughly 27%.
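These numbers can be reproduced numerically from the sketch above (the exact values of course depend on the chosen payoff and error-rate functions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Builds on error_rate and expected_payoff from the sketch above.
def best_review(p, k):
    """Optimal review time t* and the corresponding expected payoff."""
    res = minimize_scalar(lambda t: -expected_payoff(t, p, k),
                          bounds=(1e-9, 1.0), method="bounded")
    return res.x, -res.fun

print(best_review(p=0.3, k=0.01))    # review around t ~ 0.2, expected payoff still negative
print(best_review(p=0.3, k=0.001))   # review around t ~ 0.14, expected payoff positive

# Smallest probability of success for which starting the project pays off at all.
p_star = next(p for p in np.arange(0.01, 1.0, 0.001) if best_review(p, k=0.001)[1] >= 0)
print(p_star)                        # roughly 0.27 for k = 0.001
```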

Although this model is very simple, it provides some nice qualitative (and partly quantitative) insights. For example, there are indeed projects that you should not even get involved with; this is intuitively clear, but I was surprised how high the probability of success of those projects can still be. Also, when your rate of learning increases over time (due to experience with other projects), you can get involved with riskier endeavors, because your higher review confidence allows you to purge more effectively. For example, when k goes down to, say, k = 0.00001 (which is not unrealistic, as in this case you would err with a probability of only around 20% shortly after the beginning of the project), you could get involved with projects that have a probability of success of only 19%.

And no complaints about the abrupt ending – I consumed my allocated blogging time.

Let us have a securitization party

with one comment

The concept of securitization is very versatile. From Wikipedia:

Securitization is a structured finance process that distributes risk by aggregating debt instruments in a pool, then issues new securities backed by the pool. The term “Securitisation” is derived from the fact that the form of financial instruments used to obtain funds from the investors are securities. As a portfolio risk backed by amortizing cash flows – and unlike general corporate debt – the credit quality of securitized debt is non-stationary due to changes in volatility that are time- and structure-dependent. If the transaction is properly structured and the pool performs as expected, the credit risk of all tranches of structured debt improves; if improperly structured, the affected tranches will experience dramatic credit deterioration and loss. All assets can be securitized so long as they are associated with cash flow. Hence, the securities which are the outcome of Securitisation processes are termed asset-backed securities (ABS). From this perspective, Securitisation could also be defined as a financial process leading to an issue of an ABS.

The cash flows of the underlying assets are paid out according to the seniority of the tranches in a waterfall-like structure: first the claims of the most senior tranche are satisfied and, if there are remaining cash flows, the claims of the next tranche are satisfied. This continues as long as there are cash flows left to cover claims:

Individual securities are often split into tranches, or categorized into varying degrees of subordination. Each tranche has a different level of credit protection or risk exposure than another: there is generally a senior (“A”) class of securities and one or more junior subordinated (“B,” “C,” etc.) classes that function as protective layers for the “A” class. The senior classes have first claim on the cash that the SPV receives, and the more junior classes only start receiving repayment after the more senior classes have repaid. Because of the cascading effect between classes, this arrangement is often referred to as a cash flow waterfall. In the event that the underlying asset pool becomes insufficient to make payments on the securities (e.g. when loans default within a portfolio of loan claims), the loss is absorbed first by the subordinated tranches, and the upper-level tranches remain unaffected until the losses exceed the entire amount of the subordinated tranches. The senior securities are typically AAA rated, signifying a lower risk, while the lower-credit quality subordinated classes receive a lower credit rating, signifying a higher risk.

In more mathematical terms, securitization basically works as follows: take your favorite set of random variables (for the sake of simplicity, say, binary ones) and consider the joint distribution of these variables (pooling). In a next step, determine percentiles of the joint distribution (of default, i.e., 0) that you sell off separately (tranching). The magic happens via the law of large numbers and the central limit theorem (and variants thereof): although each variable can have a high probability of default, the probability that more than, say, x% of them default at the same time decreases (almost) exponentially in the pool size, provided the defaults are sufficiently independent. Thus the resulting x-percentile can have a low probability of default even for small x. That is the magic behind securitization, which is called credit enhancement.
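A minimal Monte Carlo sketch of this effect (my own toy numbers; defaults are assumed independent here, so the correlation issue that makes real-world pools tricky is deliberately left out):

```python
import numpy as np

rng = np.random.default_rng(0)

n_loans = 100        # size of the pool
p_default = 0.10     # default probability of each individual loan
attachment = 0.20    # the senior piece is hit only if more than 20% of the pool defaults
n_sims = 200_000

defaults = rng.random((n_sims, n_loans)) < p_default
pool_loss = defaults.mean(axis=1)      # fraction of the pool in default per scenario

print("P(single loan defaults):        ", p_default)
print("P(pool loss exceeds attachment):", (pool_loss > attachment).mean())
```

With independent defaults the second number is orders of magnitude smaller than the first; with strongly correlated defaults it is not, which is exactly the correlation problem mentioned in (a) below.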

So given that this process of risk mitigation and tailoring of risks to the risk appetite of potential investors is rather versatile, why not apply the same concept to other cash flows that bear a certain risk of default and turn them into structured products 😉

(a) Rents: Landlords face the problem that the tenant’s credit quality is basically unknown. Often, a statement about the tenant’s income and liabilities is supposed to help estimate the risk of default, but this procedure can, at best, serve as an indicator. So why not use the same process to securitize the rent cash flows and sell the corresponding tranches back to the landlords? This would have several upsides. First of all, the landlord obtains a significantly more stable cash flow and, depending on his or her risk appetite, could even invest in the more subordinated tranches. This could potentially reduce rents, as the risk premium charged by the landlord due to his/her potentially risk-averse preferences could be reduced to the risk-neutral amount (plus some spreads, e.g., operational and structuring costs). The probability of default could be estimated much more easily for the pooled rent cash flows, as due to diversification it is well approximated by the expected value (maybe categorized into subclasses according to credit ratings). Of course, one would have to deal with problems such as adverse selection and the potentially hard task of estimating the correlation, which can have a severe impact on the value of the tranches (see my post here).

(b) Sport bets: Often these bets, as random variables, have a high probability of default (e.g., roughly 50% for a balanced win/loss bet). In order to reduce the risk through diversification, a rather large amount of cash has to be invested to obtain a reasonable risk profile. Again, securitizing those cash flows could create securities with more tailored risk profiles that could be of interest to rather risk-averse people on the one hand and risk-affine gamblers on the other.

(c) …

That is the wonderful world of structured finance 😉

Written by Sebastian

December 30, 2009 at 2:35 pm

Heading off to AFBC 2009

with one comment

I am on my way to the 22nd Australasian Finance and Banking Conference 2009 in Sydney. So, what the hell is a mathematician doing at a finance conference? Well, basically mathematics, and in particular optimization and operations research. I am thrilled to see the current developments in economics and finance that take computational aspects, which ultimately limit the amount of rationality that we can get, into account (I wrote about this before here, here, and here). In fact, I am convinced that these aspects will play an important role in the future, especially for structured products. After all, who is going to buy a structure whose value is impossible to compute? Not to mention other complications such as bad data or dangerous model assumptions (such as static volatilities and correlations, which are still used today!). Most valuation problems, though, can be cast as optimization problems, and especially the more complex structured products (e.g., mean-variance optimizers) explicitly require the solution of an optimization problem in order to be valued. For the easier structures, Monte Carlo based approaches (or bi-/trinomial trees) are sufficient for pricing. As Arora, Barak, Brunnermeier, and Ge show in their latest paper, for more complex structures (e.g., CDOs) these approaches might fall short of capturing the real value of the structures, due to, e.g., deliberate tampering.

I am not going to talk about the aspect of computational resources though: I will be talking about my paper “Optimal Centralization of Liquidity Management”, which is joint work with Christian Schmaltz from the Frankfurt School of Finance and Management. The problem that we are considering is basically a facility location problem: in a large banking network, where and how do you manage liquidity? In a centralized liquidity hub or rather in smaller liquidity centers spread all over the network? Being short on liquidity is a very expensive matter: either one has to borrow money via the interbank market (which is usually dried up or at least tight in tougher economic conditions) or one has to borrow via the central bank. If neither is available, the bank goes into a liquidity default. The important aspect here is that the decision on the location and the amount of liquidity produced is driven to a large extent by the liquidity demand volatility. In this sense a liquidity center turns into an option on cheap liquidity and, in fact, the value of a liquidity center can actually be captured in an option framework. The value of the liquidity center is the price of the exact demand information – the more volatility we have, the higher this price will be and the more we save when we have this information in advance. The derived liquidity center location problem implicitly computes the prices of these options, which arise as marginal costs in the optimization model. Here are the slides:

View this document on Scribd
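As a side note, the option-like character of exact demand information can be illustrated with a toy newsvendor-style calculation (entirely my own illustration with made-up cost parameters, not the model from the paper): the more volatile the demand, the more it is worth to know it in advance.

```python
import numpy as np

rng = np.random.default_rng(1)

def info_value(sigma, mu=100.0, c_hold=0.01, c_short=0.10, n=500_000):
    """Expected mismatch cost when holding the best fixed liquidity buffer.

    With perfect information about demand the mismatch cost is zero, so this
    number is also the value of knowing the demand in advance."""
    demand = np.maximum(rng.normal(mu, sigma, n), 0.0)
    # Without information: hold the cost-minimizing fixed buffer (newsvendor quantile).
    q = np.quantile(demand, c_short / (c_short + c_hold))
    return np.mean(c_hold * np.maximum(q - demand, 0.0)
                   + c_short * np.maximum(demand - q, 0.0))

# The value of the information grows with the demand volatility.
for sigma in (10.0, 30.0, 60.0):
    print(f"sigma = {sigma:5.1f}  value of exact demand information = {info_value(sigma):.2f}")
```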

Written by Sebastian

December 13, 2009 at 12:17 pm

Are twitter and friends increasing volatility in the market?

leave a comment »

While recently browsing the internet I found Google Insights, somewhat the bigger brother of Google Trends. There you can not only compare the trends of certain search terms, but also split the results into various time/location buckets and compare those. For example, you can compare the searches run in the US for “caribbean” to the ones for “recession” from Jan 2007 until today, resulting in something like this (blue -> caribbean / red -> recession):

[Figure: Google Insights search volume, “caribbean” (blue) vs. “recession” (red), Jan 2007 onwards]

One can see that queries for “caribbean” already started to drop in Jan 2008 and dropped significantly further in Sep 2008, while the ones for “recession” started to rise significantly in Aug/Sep 2008. In hindsight it is easy to see patterns – just search long enough – and it is not clear whether they constitute any meaningful correlation.

Further, while interesting for a lot of applications, historical information is not well suited for making predictions. But there are also other services such as twitter and facebook out there where users pour in tons of data in real time. Twitter especially can easily be searched in real time for trends and phrases as well. New information is quickly propagated through the network and made available to millions of people combing for specific phrases such as, e.g., “oil” or “oil price”. The following trend search is from Twist, a twitter trend service. For any point in time, a click reveals the posts written – everything updated in real time:

[Figure: Twist real-time trend chart for the searched phrase]

Now, having information on price changes and “market research” available faster and more immediately than ever before (and not only for those with Bloomberg or Reuters access), one might suspect that the volatility in the market increases, as people might act more impulsively and emotionally (as often claimed, e.g., in behavioral finance), especially if prices go down. A delay in the information processing chain smoothens the trading behavior, effectively reducing volatility. If these delays are reduced to instantaneous information availability, (short-term) volatility increases.
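The smoothing argument can be illustrated with a toy calculation (my own sketch, not a market model): the same stream of news shocks produces lower short-term volatility when it is digested over several periods instead of instantaneously.

```python
import numpy as np

rng = np.random.default_rng(2)

# The same news shocks, once absorbed instantaneously and once spread over 5 periods.
shocks = rng.normal(0.0, 1.0, 10_000)

instant = shocks                              # everyone reacts at once
kernel = np.ones(5) / 5                       # the same news digested over 5 periods
delayed = np.convolve(shocks, kernel, mode="same")

print(f"short-term volatility, instantaneous reaction: {instant.std():.3f}")
print(f"short-term volatility, delayed reaction:       {delayed.std():.3f}")
```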

Another, maybe even more critical problem could be that, using twitter and other mass-publication-of-micro-information services, pump-and-dump strategies for microcaps, which are illegal (under most jurisdictions), can be performed more effectively than ever before. As spam filters got more and more effective, pump-and-dump via spamming got harder and harder. But with micro-blogging the whole story changes. By definition there are no spammers, as you follow somebody and you do not get unsolicited emails/spam. Due to this there might be some special legal issues here that deserve extra attention: when somebody writes a tweet to spread wrong information concealed as “personal opinion” and millions eavesdrop, can the person be held responsible if the wrong information leads, e.g., to a fire sale? The story goes even a bit further: other people might re-tweet or copy the story, multiplying the number of readers and adding credibility to it as more and more versions of it are out there (a turbo-charged version of the Matthew effect and its generalizations) – who is going to reconstruct the timeline when money is at stake and a decision has to be taken now?

Probably hedge funds will soon pop up that trade this noise by mining millions of tweets for signals, trying to extract some cash from the market.

Written by Sebastian

July 24, 2009 at 8:53 pm

Rama Cont on contagious default and systemic risk

with 2 comments

A few days ago (May, 14th) Rama Cont from Columbia gave a very interesting talk at the Frankfurt School of Finance & Management about contagious default and systemic risk in financial networks. From the abstract:

The ongoing financial crisis has simultaneously underlined the importance of systemic risk and the lack of adequate indicators for measuring and monitoring it. After describing some important structural features of banking networks, we propose an indicator for measuring the systemic impact of the failure of a financial institution – the Systemic Risk Index – which combines a traditional factor-based modeling of financial risks with network contagion effects resulting from mutual exposures. Simulation studies on networks with realistic structures – in particular using data from the Brazilian interbank network – underline the importance of network structure in assessing financial stability and point to the importance of leverage and liquidity ratios of financial institutions as tools for monitoring and controlling systemic risk. In particular, we investigate the role played by credit default swap contracts and their impact on financial stability and systemic risk. Our study leads to some policy implications for a more efficient monitoring of systemic risk and financial stability.

He presented pretty remarkable results of a simulation study he conducted together with two of his students. The main goal was to introduce a “systemic risk index” (SRI) that quantifies the impact of an institution’s default on the financial system through direct connections (i.e., counterparty credit risk) or indirect connections (e.g., as seller of CDS protection). Based on that he compared the effect of risk-mitigating techniques (e.g., limits on leverage, capital requirements) on the SRI. The simulation was based on random graphs constructed via preferential attachment, i.e., new nodes in the system tend to connect to the better-connected ones – the Matthew principle. The constructed graphs were structurally similar to the real-world networks observed in Brazil and Austria. At the risk of oversimplifying, the key insights were as follows (a crude toy version of such a cascade simulation is sketched after the list):

  1. The main message: It is not about being “too big to fail” but about being “too interconnected to fail”. In the presented study, size was completely uncorrelated with the potential impact given default. That is especially interesting given that in the current discussion about the financial crisis, one prominent line of argument demands the split-up of large financial institutions. Assuming that the results are realistic, this would provide only minimal systemic risk mitigation but might increase the administrative overhead of monitoring all these smaller units. Another consequence that might be even a bit more critical is the implied moral hazard. Whereas gaining a certain size in order to be “too big to fail” is a rather hard task, being “too interconnected to fail” is rather simple: given the large impact of only a few CDSs described below, it might suffice to buy and sell a lot of CDSs (or other structures) back-to-back (i.e., you are long and short the same position and thus net flat) in order to insure yourself against failure by weaving or implanting yourself deep into the financial network. (see also 2. below)
  2. Based on the real-world network that were studied, only a few hubs in the network constitute the largest proportion of potential damage. These are the ones that are highly connected. Thus a monitoring focused on these particular nodes that could be identified using the proposed SRI might already lead to a considerable mitigation of systemic risk.
  3. It does make a difference whether you have a limit on leverage or capital requirements only. The impact of the worst nodes in the network dropped considerably in the presence of limits on leverage (as employed, for example, in Canada).
  4. Comparing the situation with and without CDSs, the presence of only a few CDSs can change the dynamics of the default propagation dramatically by introducing “shortcuts” to the network – effects similar to the small world phenomenon.
  5. In the model at hand, it did not make a difference whether CDS contracts were speculative or hedging instruments. Note that this was under the assumption that the overall number of contracts in the simulation remained constant and only the proportions were altered – otherwise, under the mainstream assumption that more than 50% of all CDSs are speculative, removing those would reduce the number of contracts present by more than 50% and thus considerably reduce the risk through “shortcuts”.
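As mentioned above, here is a crude toy version of such a cascade simulation (entirely my own construction, not the model from the talk; the graph generator and all parameters are arbitrary choices), just to illustrate how network position, rather than balance sheet size, can drive the potential impact given default:

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(3)

# A preferential-attachment network with unit exposures on every link; a bank
# defaults once the number of defaulted counterparties exceeds its capital.
# Balance sheet size is drawn independently and does not enter the dynamics,
# so any impact has to come from the network position, not from size.
n = 500
G = nx.barabasi_albert_graph(n, 2, seed=3)
size = rng.lognormal(0.0, 1.0, n)             # "balance sheet size"
capital = rng.uniform(0.5, 2.5, n)            # loss-absorbing buffer, in units of exposure

def cascade_size(seed_bank):
    """Number of banks in default after seed_bank fails and losses propagate."""
    defaulted = {seed_bank}
    losses = np.zeros(n)
    frontier = [seed_bank]
    while frontier:
        nxt = []
        for d in frontier:
            for nb in G.neighbors(d):
                if nb not in defaulted:
                    losses[nb] += 1.0
                    if losses[nb] > capital[nb]:
                        defaulted.add(nb)
                        nxt.append(nb)
        frontier = nxt
    return len(defaulted)

impact = np.array([cascade_size(i) for i in range(n)])
degree = np.array([G.degree(i) for i in range(n)])
print("corr(impact, degree):", round(float(np.corrcoef(impact, degree)[0, 1]), 2))
print("corr(impact, size):  ", round(float(np.corrcoef(impact, size)[0, 1]), 2))
```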

Written by Sebastian

May 16, 2009 at 11:39 pm

Of couples and copulas

leave a comment »

… is the title of a great article in the Financial Times from 04/29/09 (thx for pointing me to that one). It is about the origins of the copula formula and its connection to actuarial science, where a similar problem occurs, namely to estimate or quantify the remaining life expectancy in couples – more specifically, the remaining life expectancy of the left-behind partner conditional on the other one being dead. The broken heart syndrome:

Pages and pages of death records showed the same marked trend: that in human couples, the death of one partner significantly increases the chances of the death of the other. Dying of a broken heart – in the most general sense, not necessarily from stress cardiomyopathy – was not a rare occurrence, but something of a statistical probability.

A nice story about 1001 correlations (covering everything from Black-Scholes-Merton to LTCM to Russian bonds, etc.) … Definitely worth reading.

Written by Sebastian

May 6, 2009 at 11:46 am

Don’t Blame The Elite?

leave a comment »

Another article worth reading: “Don’t Blame The Elite” by Tim Harford.

For years, we were told that Wall Street attracted the very best. That was why American investment banks were the envy of the world; that was why stratospheric salaries and bonuses were essential. Other financial centers, such as London, fought tooth and nail to attract the same elite. They were worth it, we were told: If you pay peanuts, you get monkeys.


Written by Sebastian

May 2, 2009 at 2:53 pm