Sebastian Pokutta's Blog

Mathematics and related topics

Archive for December 2009

Let us have a securitization party

with one comment

The concept of securitization is very versatile. From Wikipedia:

Securitization is a structured finance process that distributes risk by aggregating debt instruments in a pool, then issues new securities backed by the pool. The term “Securitisation” is derived from the fact that the form of financial instruments used to obtain funds from the investors are securities. As a portfolio risk backed by amortizing cash flows – and unlike general corporate debt – the credit quality of securitized debt is non-stationary due to changes in volatility that are time- and structure-dependent. If the transaction is properly structured and the pool performs as expected, the credit risk of all tranches of structured debt improves; if improperly structured, the affected tranches will experience dramatic credit deterioration and loss. All assets can be securitized so long as they are associated with cash flow. Hence, the securities which are the outcome of Securitisation processes are termed asset-backed securities (ABS). From this perspective, Securitisation could also be defined as a financial process leading to an issue of an ABS.

The cash flows of the underlying assets are paid out according to the seniority of the tranches in a waterfall-like structure: first the claims of the most senior tranche are satisfied, and if cash flows remain, the claims of the next tranche are satisfied. This continues as long as there are cash flows left to cover claims:

Individual securities are often split into tranches, or categorized into varying degrees of subordination. Each tranche has a different level of credit protection or risk exposure than another: there is generally a senior (“A”) class of securities and one or more junior subordinated (“B,” “C,” etc.) classes that function as protective layers for the “A” class. The senior classes have first claim on the cash that the SPV receives, and the more junior classes only start receiving repayment after the more senior classes have repaid. Because of the cascading effect between classes, this arrangement is often referred to as a cash flow waterfall. In the event that the underlying asset pool becomes insufficient to make payments on the securities (e.g. when loans default within a portfolio of loan claims), the loss is absorbed first by the subordinated tranches, and the upper-level tranches remain unaffected until the losses exceed the entire amount of the subordinated tranches. The senior securities are typically AAA rated, signifying a lower risk, while the lower-credit quality subordinated classes receive a lower credit rating, signifying a higher risk.
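To make the waterfall mechanics concrete, here is a minimal sketch (with made-up tranche sizes and cash flows, not taken from any actual deal) of how a realized cash flow is allocated across tranches by seniority:

def waterfall(cash, tranche_claims):
    """Allocate a realized cash flow to tranches ordered from most senior to most junior.

    Each tranche receives at most its claim; whatever is left flows down
    to the next tranche. Returns the payment made to each tranche.
    """
    payments = []
    for claim in tranche_claims:
        paid = min(cash, claim)
        payments.append(paid)
        cash -= paid
    return payments

# Hypothetical example: senior, mezzanine, and junior tranches with claims 70, 20, 10.
# If only 85 units of cash come in, the junior tranche is wiped out and the
# mezzanine tranche absorbs the remaining shortfall.
print(waterfall(85, [70, 20, 10]))  # -> [70, 15, 0]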

In more mathematical terms, securitization basically works as follows: take your favorite set of random variables (for the sake of simplicity, say binary ones) and consider their joint distribution (pooling). In a next step, determine percentiles of the joint distribution of default (i.e., of the value 0) that you sell off separately (tranching). The magic happens via the law of large numbers and the central limit theorem (and variants thereof): although each individual variable can have a high probability of default, the probability that more than, say, x% of them default at the same time decreases (almost) exponentially with the size of the pool. Thus the resulting x-percentile can have a low probability of default already for small x. That is the magic behind securitization, which is called credit enhancement.
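As a back-of-the-envelope illustration of this credit enhancement effect (a sketch with made-up numbers, assuming independent defaults for simplicity), one can compute how unlikely it is that a large fraction of a pool defaults simultaneously, even though each individual asset defaults with a fairly high probability:

from math import comb

def prob_more_than_k_defaults(n, p, k):
    """P[more than k out of n independent assets default], each defaulting with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

# Hypothetical pool: 100 independent loans, each defaulting with probability 10%.
# The probability that more than 25% of the pool defaults is already tiny, so a
# tranche that only absorbs losses beyond the 25% level is very safe.
# (Correlation between the loans, ignored here, weakens this effect considerably.)
print(prob_more_than_k_defaults(100, 0.10, 25))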

So given that this process of risk mitigation and tailoring of risks to the risk appetite of potential investors is rather versatile, why not apply the same concept to other cash flows that bear a certain risk of default and turn them into structured products? 😉

(a) Rents: Landlords face the problem that a tenant’s credit quality is basically unknown. Often, a statement of the tenant’s income and liabilities is supposed to help estimate the risk of default, but this procedure can, at best, serve as an indicator. So why not use the same process to securitize the rent cash flows and sell the corresponding tranches back to the landlords? This would have several upsides. First of all, the landlord obtains a significantly more stable cash flow and, depending on his or her risk appetite, could even invest in the more subordinated tranches. This could potentially reduce rents, as the risk premium charged by a risk-averse landlord could be reduced to the risk-neutral amount (plus some spreads, e.g., operational and structuring costs). The probability of default would also be significantly easier to estimate for the pooled rent cash flows, since due to diversification it is well approximated by the expected value (maybe categorized into subclasses according to credit ratings). Of course, one would have to deal with problems such as adverse selection and the potentially hard task of estimating the correlation, which can have a severe impact on the value of the tranches (see my post here).

(b) Sports bets: Often these bets, viewed as random variables, have a high probability of default (e.g., roughly 50% for a balanced win/loss bet). To reduce this risk through diversification, a rather large amount of cash has to be invested to obtain a reasonable risk profile. Again, securitizing those cash flows could create securities with more tailored risk profiles that could be of interest to rather risk-averse investors on the one hand and risk-affine gamblers on the other.

(c) …

That is the wonderful world of structured finance 😉

Written by Sebastian

December 30, 2009 at 2:35 pm

GLPK 4.41 released

with one comment

A new version of the GNU Linear Programming Kit (GLPK) has been released – see here for GLPK tutorials. An updated version of the GUSEK Windows GUI will probably follow soon. From the release notes:

GLPK 4.41 — Release Information
********************************

Release date: Dec 21, 2009

GLPK (GNU Linear Programming Kit) is intended for solving large-scale
linear programming (LP), mixed integer linear programming (MIP), and
other related problems. It is a set of routines written in ANSI C and
organized as a callable library.

In this release:

The following new API routines were added:

glp_transform_row     transform explicitly specified row
glp_transform_col     transform explicitly specified column
glp_prim_rtest        perform primal ratio test
glp_dual_rtest        perform dual ratio test

For description of these new routines see a new edition of the
reference manual included in the distribution.

The following API routines are deprecated: lpx_transform_row,
lpx_transform_col, lpx_prim_ratio_test, lpx_dual_ratio_test.

Some improvements were made in the MIP solver (glp_intopt).

The SQL table driver used to read/write data in MathProg models
was changed to allow multiple arguments separated by semicolon
in SQL statements. Thanks to Xypron <xypron.glpk@gmx.de>.

Two new options were added to the glpsol stand-alone solver:
--seed value (to initialize the pseudo-random number generator
used in MathProg models with specified value), and
--ini filename (to use a basis previously saved with -w option
as an initial basis on solving similar LP's).

Two new MathProg example models were included. Thanks to
Nigel Galloway <nigel_galloway@operamail.com> and Noli Sicad
<nsicad@gmail.com> for contribution.

Scripts to build GLPK with Microsoft Visual Studio 2010 for
both 32-bit and 64-bit Windows were included. Thanks to Xypron
<xypron.glpk@gmx.de> for contribution and testing.

See GLPK web page at <http://www.gnu.org/software/glpk/glpk.html>.

GLPK distribution can be ftp’ed from <ftp://ftp.gnu.org/gnu/glpk/> or
from some mirror ftp sites; see <http://www.gnu.org/order/ftp.html>.

UPDATE (01.01.2010): A new version of GUSEK is also available:

I’ve updated the Gusek project on SourceForge:
http://gusek.sourceforge.net

Release 0.2.8 changes:
– GLPK updated to 4.41.
– Added Java files support (as in native SciTE).
– Added gnuplot files support (testing – thanks to Noli Sicad).

Gusek provides an open source LP/MILP IDE for Win32,
packing a custom version of the SciTE editor linked to the
GLPK standalone solver (glpsol.exe).

Best Regards (and happy new year!),
Luiz Bettoni

Written by Sebastian

December 22, 2009 at 10:54 am

Posted in Announcements, Software

Heading off to AFBC 2009

with one comment

I am on my way to the 22nd Australasian Finance and Banking Conference 2009 in Sydney. So what the hell is a mathematician doing at a finance conference? Well, basically mathematics, and in particular optimization and operations research. I am thrilled to see the current developments in economics and finance that take into account computational aspects, which ultimately limit the amount of rationality that we can get (I wrote about this before here, here, and here). In fact, I am convinced that these aspects will play an important role in the future, especially for structured products. After all, who is going to buy a structure whose value is impossible to compute? Not to mention other complications such as bad data or dangerous model assumptions (such as static volatilities and correlations, which are still used today!). Most valuation problems, though, can be cast as optimization problems, and especially the more complex structured products (e.g., those built around a mean-variance optimizer) explicitly require the solution of an optimization problem in order to be valued. For the easier structures, Monte Carlo based approaches (or bi-/trinomial trees) are sufficient for pricing. As Arora, Barak, Brunnermeier, and Ge show in their latest paper, for more complex structures (e.g., CDOs) these approaches might fall short of capturing the real value of the structures, due to, e.g., deliberate tampering.
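As a toy illustration of the Monte Carlo approach mentioned above (a sketch for a plain European call under standard Black-Scholes dynamics with made-up parameters; nothing specific to the structures discussed at the conference):

import random
from math import exp, sqrt

def mc_european_call(s0, strike, r, sigma, maturity, n_paths=100_000, seed=42):
    """Price a European call by Monte Carlo under geometric Brownian motion."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * maturity
    vol = sigma * sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s_t = s0 * exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(s_t - strike, 0.0)
    return exp(-r * maturity) * payoff_sum / n_paths

# Hypothetical parameters: spot 100, strike 100, 2% rate, 20% volatility, 1 year.
print(mc_european_call(100.0, 100.0, 0.02, 0.20, 1.0))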

I am not going to talk about computational aspects, though: I will be talking about my paper “Optimal Centralization of Liquidity Management”, which is joint work with Christian Schmaltz from the Frankfurt School of Finance and Management. The problem that we consider is basically a facility location problem: in a large banking network, where and how do you manage liquidity? In a centralized liquidity hub, or rather in smaller liquidity centers spread all over the network? Being short on liquidity is a very expensive matter: either one has to borrow money via the interbank market (which is usually dried up, or at least tight, in tougher economic conditions) or one has to borrow via the central bank. If neither is available, the bank goes into a liquidity default. The important aspect here is that the decision on the location and the amount of liquidity produced is driven to a large extent by the volatility of the liquidity demand. In this sense a liquidity center turns into an option on cheap liquidity, and in fact the value of a liquidity center can actually be captured in an option framework. The value of the liquidity center is the price of the exact demand information: the more volatility we have, the higher this price will be and the more we save when we have this information in advance. The derived liquidity center location problem implicitly computes the prices of these options, which arise as marginal costs in the optimization model. Here are the slides:

View this document on Scribd
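To illustrate the option-like character of a liquidity center, here is a stylized toy simulation of my own (made-up numbers and a deliberately simplistic cost model, not the model from the paper): the gap between the expected cost of planning liquidity in advance and the expected cost under perfect knowledge of the demand is the price of the exact demand information, and it grows with the demand volatility.

import random
from statistics import mean

def expected_costs(planned, mu, sigma, c_plan, c_short, n_sims=100_000, seed=1):
    """Expected cost of pre-producing `planned` units of liquidity at unit cost c_plan,
    covering any shortfall at the expensive rate c_short (interbank / central bank),
    compared with the expected cost under perfect knowledge of the demand."""
    rng = random.Random(seed)
    cost_planned, cost_perfect = [], []
    for _ in range(n_sims):
        demand = max(rng.gauss(mu, sigma), 0.0)
        shortfall = max(demand - planned, 0.0)
        cost_planned.append(planned * c_plan + shortfall * c_short)
        cost_perfect.append(demand * c_plan)  # with exact demand information
    return mean(cost_planned), mean(cost_perfect)

# Hypothetical numbers: mean demand 100, planned liquidity at 1.0 per unit,
# emergency liquidity at 3.0 per unit. The value of the demand information
# (and hence of the liquidity center as an option on cheap liquidity)
# increases with the volatility of the liquidity demand.
for sigma in (5.0, 20.0, 40.0):
    planned_cost, perfect_cost = expected_costs(100.0, 100.0, sigma, 1.0, 3.0)
    print(f"sigma = {sigma:5.1f}   price of information ~ {planned_cost - perfect_cost:7.2f}")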

Written by Sebastian

December 13, 2009 at 12:17 pm

Characterizing border bases using polyhedral combinatorics

with one comment

Gábor Braun from the Hungarian Academy of Sciences and I just managed to finish our article “Border bases and order ideals: a polyhedral characterization”, which is an extension of our former work that was restricted to degree-compatible order ideals. The article addresses an old problem in computer algebra: Gröbner bases and border bases are usually computed with respect to a term ordering (basically a total ordering on the monomials that is compatible with multiplication). Whereas for Gröbner bases this makes sense, for border bases relying on a (degree-compatible) term ordering excludes a large number of potential border bases, as the associated order ideals cannot be obtained from a term ordering.

More precisely, for a given zero-dimensional ideal I \subseteq K[X], a border basis arises from a special decomposition of the polynomial ring K[X] into K[X] = I \oplus \langle \mathcal O \rangle_K (as vector spaces), where \mathcal O is a subset of the monomials with the property that whenever m_1 \mid m_2 and m_2 \in \mathcal O, then also m_1 \in \mathcal O; a subset with this property is called an order ideal. Every order ideal that induces such a decomposition is in one-to-one correspondence with a border basis (as a consequence of the directness of the sum). In case the order ideal and the associated decomposition are induced by a Gröbner basis and its underlying term ordering, we can compute a border basis using the classical border basis algorithm (a specialization of Mourrain’s generic algorithm). We do not have to worry about any combinatorial complications here.
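To give a small illustrative example (my own toy example, not taken from the article): for I = \langle x^2 - 1, y - x \rangle \subseteq K[x,y], both \mathcal O_1 = \{1, x\} and \mathcal O_2 = \{1, y\} are order ideals satisfying K[x,y] = I \oplus \langle \mathcal O_i \rangle_K. The border of \mathcal O_1 consists of the monomials x^2, y, xy, and the corresponding border basis is \{x^2 - 1, y - x, xy - 1\}: one polynomial for each border monomial, namely the difference of the border monomial and its representation in \langle \mathcal O_1 \rangle_K modulo I.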

The situation changes completely when we want to compute border bases for arbitrary order ideals. First of all, not every order ideal actually supports a border basis. For a given order ideal that does support a border basis, though, one can adapt the classical border basis algorithm to compute the border basis (with respect to this order ideal) as well. Thus the main problem that we are facing is to identify those order ideals that do support a border basis (of a given zero-dimensional ideal). The combinatorial challenge is that we have to ensure two opposing conditions: on the one hand, we need to ensure that our complementary basis forms an order ideal, and at the same time we have to ensure that the decomposition is direct, i.e., we obtain constraints on the dimension and linear independence of certain vector subspaces. This gives rise to a nice combinatorial problem that can be captured using polyhedral combinatorics. In fact, for a given zero-dimensional ideal I one can define a 0/1 polytope, the order ideal polytope, whose integral points are in one-to-one correspondence with exactly those order ideals that do support a border basis. We were also able to show that optimizing over the integral hull of the order ideal polytope is NP-hard, and thus it is unlikely that we can obtain a nice characterization of the integral points. This underlines the fact that a lot of the combinatorial structure of an ideal is reflected in its factor spaces and complementary bases.


Written by Sebastian

December 8, 2009 at 5:39 pm

The impact of estimation errors on CDO pricing

with 2 comments

Another interesting, nicely written paper about valuing and pricing CDOs is “The Economics of Structured Finance” by Coval, Jurek, and Stafford, which just appeared in the Journal of Economic Perspectives. It nicely complements the paper by Arora, Barak, Brunnermeier, and Ge titled “Computational Complexity and Information Asymmetry in Financial Products” (see also here). The authors argue that already small estimation errors in the correlation and the probability of default (of the underlying loans) can have a devastating effect on the overall performance of a tranche. Whereas the senior tranches remain quite stable in the presence of estimation errors, the overall rating of the junior and mezzanine tranches can be greatly affected. Intuitively this is clear, as the junior and mezzanine tranches act as a cushion for the senior tranches (and, in turn, the junior tranches protect the mezzanine tranches). What is not so clear at first is how pronounced this effect is: even the smallest estimation errors lead to a rapid decline in the credit quality of these tranches. In fact, what happens here is that the junior and mezzanine tranches pay the price for the credit enhancement of the senior tranches, and the stability of the latter with respect to estimation errors comes at the expense of highly sensitive junior and mezzanine tranches.

This effect becomes even more severe for CDO^2 structures, where the junior and mezzanine tranches are repackaged again. These structures are highly sensitive to the slightest variations or estimation errors in the probability of default or the correlation.

In both cases, slight imprecisions in the estimation can have severe impacts. Conversely, slight changes in the probability of default or the correlation due to changed economic conditions can have a devastating effect on the value of the lower-priority tranches.
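To get a feeling for this sensitivity, here is a rough simulation sketch (my own toy example using the standard one-factor Gaussian copula with made-up parameters, zero recovery, and a hypothetical 3%-7% mezzanine tranche; it is not the model used in either paper):

import random
from statistics import NormalDist

def expected_tranche_loss(p, rho, n_loans=100, attach=0.03, detach=0.07,
                          n_sims=20_000, seed=0):
    """Expected loss (as a fraction of tranche notional) of a tranche absorbing pool
    losses between `attach` and `detach`, under a one-factor Gaussian copula with
    individual default probability p, asset correlation rho, and zero recovery."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(p)
    total = 0.0
    for _ in range(n_sims):
        z = rng.gauss(0.0, 1.0)  # common systematic factor
        defaults = sum(
            1 for _ in range(n_loans)
            if (rho ** 0.5) * z + ((1.0 - rho) ** 0.5) * rng.gauss(0.0, 1.0) < threshold
        )
        pool_loss = defaults / n_loans
        tranche_loss = min(max(pool_loss - attach, 0.0), detach - attach) / (detach - attach)
        total += tranche_loss
    return total / n_sims

# Small shifts in the estimated default probability or correlation move the
# expected loss of the mezzanine tranche considerably, while a senior tranche
# (say, attaching at 15%) barely notices them.
for p, rho in [(0.02, 0.10), (0.03, 0.10), (0.02, 0.20)]:
    print(f"p = {p:.2f}, rho = {rho:.2f}, expected tranche loss = {expected_tranche_loss(p, rho):.3f}")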

So if you are interested in CDOs, credit enhancement, and structured finance, you should give these papers a look.


Written by Sebastian

December 6, 2009 at 4:32 pm