Sebastian Pokutta's Blog

Mathematics and related topics

Archive for the ‘Talks’ Category

On linear programming formulations for the TSP polytope

with 14 comments

Next week I am planning to give a talk on our recent paper, joint work with Samuel Fiorini, Serge Massar, Hans Raj Tiwary, and Ronald de Wolf, in which we consider linear and semidefinite extended formulations and prove that any linear programming formulation of the traveling salesman polytope has super-polynomial size (independent of P-vs-NP). From the abstract:

We solve a 20-year old problem posed by M. Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the maximum cut problem and the stable set problem. These results follow from a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.

The history of this problem is quite interesting. From Gerd Woeginger’s P-versus-NP page (see also Mike Trick’s blog post on Swart’s attempts):

In 1986/87 Ted Swart (University of Guelph) wrote a number of papers (some of them had the title: “P=NP”) that gave linear programming formulations of polynomial size for the Hamiltonian cycle problem. Since linear programming is polynomially solvable and Hamiltonian cycle is NP-hard, Swart deduced that P=NP.

In 1988, Mihalis Yannakakis closed the discussion with his paper “Expressing combinatorial optimization problems by linear programs” (Proceedings of STOC 1988, pp. 223-228). Yannakakis proved that expressing the traveling salesman problem by a symmetric linear program (as in Swart’s approach) requires exponential size. The journal version of this paper has been published in Journal of Computer and System Sciences 43, 1991, pp. 441-466.

In his paper, Yannakakis posed the question whether one can show such a lower bound unconditionally, i.e., without the symmetry assumption, and he conjectured that symmetry ‘should not help much’. This sounded reasonable; however, no proof was known. In 2010, to the surprise of many, Kaibel, Pashkovich, and Theis proved that there exist polytopes whose symmetric extension complexity (the minimum number of facets of any symmetric extension) is super-polynomial, whereas asymmetric extended formulations exist that use only polynomially many inequalities; i.e., symmetry does matter. On top of that, the considered polytopes were closely related to the matching polytope (used by Yannakakis to establish the TSP result), which rekindled the discussion on the (unconditional) extension complexity of the traveling salesman polytope. Kaibel asked whether 0/1-polytopes have extended formulations with a polynomial number of inequalities in general, or whether there exist 0/1-polytopes that need a super-polynomial number of facets in any extension. Beware! This is neither in contradiction with nor related to the P-vs-NP question, as we only talk about the number of inequalities and not the encoding length of the coefficients. The question was settled by Rothvoss in 2011 by a very nice counting argument: there are 0/1-polytopes that need a super-polynomial number of inequalities in any extension.

To make the following slightly more formal, let {P \subseteq {\mathbb R}^n} be a polytope (of some dimension). An extended formulation (or extension) of {P} is another polytope {Q \subseteq {\mathbb R}^\ell} such that there exists a linear projection {\pi} with {\pi(Q) = P}. The size of an extension {Q} is the number of facets of {Q}, and the extension complexity of {P} (denoted by {\text{xc}(P)}) is the minimum size of any extension of {P}. We are interested in {\text{xc}(P)} where {P} is the traveling salesman polytope. Our proof heavily relies on a connection between the extension complexity of a polytope and communication complexity (the basic connection was made by Yannakakis and it was later extended by Faenza, Fiorini, Grappe, and Tiwary as well as Fiorini, Kaibel, Pashkovich, and Theis in 2011). In fact, suppose that we have an inner and an outer description of our polytope {P}, say {P = \text{conv}\{v_1, \dots, v_k\} = \{x \in {\mathbb R}^n \mid Ax \leq b\}}. Then we can define the slack matrix {S(P)} by {S_{ij} = b_i - A_i v_j}, i.e., the slack of the vertices with respect to the inequalities. The extension complexity of a polytope is equal to the nonnegative rank of {S}, and determining the nonnegative rank is essentially equivalent to determining the best protocol to compute the entries of {S} in expectation (Alice gets a row index and Bob a column index). The latter can be bounded from below by the non-deterministic communication complexity, and we use a certain matrix {M_{ab} = (1-a^Tb)^2} which has large non-deterministic communication complexity (see de Wolf 2003). This matrix is special as it constitutes the slack of some valid inequalities for the correlation polytope, which eventually leads to an exponential lower bound for the extension complexity of the correlation polytope. The latter is affinely isomorphic to the cut polytope.
Then, via a reduction-like mechanism, similar lower bounds are established for the stable set polytope ({\text{xc}(\text{stableSet}) \geq 2^{\Omega(n^{1/2})}}) for a certain family of graphs. Using the fact that the TSP polytope contains every stable set polytope as a face (Yannakakis) for suitably chosen parameters, we obtain {\text{xc}(\text{TSP}) \geq 2^{\Omega(n^{1/4})}}.
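To make the slack matrix concrete, here is a tiny sanity check with numpy (my own toy example, not from the paper): the slack matrix of the unit square.

```python
import numpy as np

# Unit square P = conv{(0,0),(1,0),(0,1),(1,1)} = {x : Ax <= b}.
A = np.array([[-1, 0], [0, -1], [1, 0], [0, 1]])
b = np.array([0, 0, 1, 1])
V = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])  # one vertex per row

# Slack matrix: S_ij = b_i - A_i . v_j  (rows = inequalities, columns = vertices).
S = b[:, None] - A @ V.T
print(S)                          # entrywise nonnegative; zeros mark vertex-facet incidences
print(np.linalg.matrix_rank(S))   # 3, i.e. dim(P) + 1: each slack is affine in the vertex
```

By Yannakakis’ factorization theorem, {\text{xc}(P)} equals the nonnegative rank of {S}. Here the ordinary rank is 3, but the nonnegative rank is 4: a rectangle-covering argument on the zero pattern of {S} shows no nonnegative factorization with three factors exists, matching the fact that the square admits no extension with fewer facets than its own four.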

Here are the slides:

Written by Sebastian

January 5, 2012 at 3:17 pm

A polyhedral characterization of border bases

leave a comment »

Tomorrow I will give a talk on a recent paper which is joint work with Gábor Braun from the Alfréd Rényi Institute of Mathematics. It deals with the polyhedral characterization of all admissible order ideals (and hence border bases) that exist for a given zero-dimensional ideal. As a hardness result, it also follows that determining an optimal order ideal (w.r.t. a linear objective function) is NP-hard. This is particularly interesting as it shows that not only is computing the L-stable span (the vector space approximation of the ideal) hard, but so is choosing a ‘nice’ basis, which is a mere basis transformation from a linear algebra point of view (see also my previous post for some more details). Here are the slides:

View this document on Scribd
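As a simpler, related illustration (the Gröbner special case, not the border basis machinery from the paper): for a zero-dimensional ideal, the standard monomials of a Gröbner basis form one admissible order ideal, namely the monomials not divisible by any leading term. A small sympy sketch:

```python
from itertools import product
from sympy import groebner, symbols

x, y = symbols('x y')
# A zero-dimensional ideal: I = <x^2 - 1, y^2 - 1> has the four roots (+-1, +-1).
G = groebner([x**2 - 1, y**2 - 1], x, y, order='lex')

# Leading exponent tuples of the Groebner basis elements.
lead_exps = [p.monoms(order='lex')[0] for p in G.polys]

# Order ideal (standard monomials): exponents not componentwise >= any leading exponent.
std = [(i, j) for i, j in product(range(4), repeat=2)
       if not any(i >= a and j >= b for a, b in lead_exps)]
print(std)  # exponent tuples (i, j) of x**i * y**j, here {1, y, x, x*y}
```

The point of the paper is that, unlike in this fixed-term-order picture, the set of all admissible order ideals has a polyhedral description, and optimizing a linear objective over it is NP-hard.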

Written by Sebastian

January 31, 2010 at 7:58 pm

Heading off to AFBC 2009

with one comment

I am on my way to the 22nd Australasian Finance and Banking Conference 2009 in Sydney. So, what the hell is a mathematician doing at a finance conference? Well, basically mathematics, and in particular optimization and operations research. I am thrilled to see the current developments in economics and finance that take computational aspects, which ultimately limit the amount of rationality that we can get, into account (I wrote about this before here, here, and here). In fact, I am convinced that these aspects will play an important role in the future, especially for structured products. After all, who is going to buy a structure whose value is impossible to compute? Not to mention other complications such as bad data or dangerous model assumptions (such as static volatilities and correlations, which are still used today!). Most valuation problems though can be cast as optimization problems, and especially the more complex structured products (e.g., mean-variance optimizers) do explicitly ask for a solution to an optimization problem in order to be valuated. For the simpler structures, Monte Carlo based approaches (or bi-/trinomial trees) are sufficient for pricing. As Arora, Barak, Brunnermeier, and Ge show in their latest paper, for more complex structures (e.g., CDOs) these approaches may fall short of capturing the real value of the structures, due to, e.g., deliberate tampering.

I am not going to talk about the computational-resources aspect though: I will be talking about my paper “Optimal Centralization of Liquidity Management”, which is joint work with Christian Schmaltz from the Frankfurt School of Finance and Management. The problem that we are considering is basically a facility location problem: in a large banking network, where and how do you manage liquidity? In a centralized liquidity hub, or rather in smaller liquidity centers spread all over the network? Being short on liquidity is a very expensive matter: either one has to borrow money via the interbank market (which is usually dried up, or at least tight, in tougher economic conditions) or one has to borrow via the central bank. If neither is available, the bank goes into a liquidity default. The important aspect here is that the decision on the location and the amount of liquidity produced is driven to a large extent by the liquidity demand volatility. In this sense a liquidity center turns into an option on cheap liquidity, and in fact the value of a liquidity center can actually be captured in an option framework. The value of the liquidity center is the price of the exact demand information – the more volatility we have, the higher this price will be and the more we save when we have this information in advance. The derived liquidity center location problem implicitly computes the prices of these options, which arise as marginal costs in the optimization model. Here are the slides:

View this document on Scribd
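At its combinatorial core, the location question resembles uncapacitated facility location. Here is a deliberately tiny brute-force sketch with invented numbers (the actual model additionally prices demand volatility via the option framework described above, which this toy omits):

```python
from itertools import chain, combinations

# Toy instance (all numbers invented): 3 candidate liquidity-center sites, 3 branches.
fixed = [4.0, 3.0, 5.0]        # cost of opening a liquidity center at site i
supply = [[1, 6, 7],           # supply[i][j]: cost of serving branch j from site i
          [5, 2, 6],
          [6, 5, 1]]
branches = range(3)

def total_cost(open_sites):
    # Opening costs plus the cheapest available supply route per branch.
    if not open_sites:
        return float('inf')
    return (sum(fixed[i] for i in open_sites)
            + sum(min(supply[i][j] for i in open_sites) for j in branches))

subsets = chain.from_iterable(combinations(range(3), k) for k in (1, 2, 3))
best = min(subsets, key=total_cost)
print(best, total_cost(best))
```

Even in this toy, a single hub and several distributed centers can tie in cost; facility location is NP-hard in general, so realistic network sizes call for integer programming rather than enumeration.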

Written by Sebastian

December 13, 2009 at 12:17 pm

Update: A Sneak Preview of Wolfram|Alpha

leave a comment »

Today, there will be a sneak preview of the much-hyped Wolfram|Alpha search engine (see my preview post) at Harvard’s Berkman Center. The event will be broadcast as a live webcast at 3pm ET.

Tuesday, April 28, 3:00 pm
Austin East Classroom, Austin Hall, Harvard Law School
This event will be webcast live at 3:00 pm ET.
We have reached capacity for this event. You can RSVP if you would like to attend the event in person but be seated in our overflow space.

Questions can be posted live on Twitter to BerkmanCenter (via direct message or @reply) or via the live question tool.

References/see also:

  1. [Today] A Sneak Preview of Wolfram|Alpha: Computational Knowledge Engine | Berkman Center.
  2. Wolfram Alpha Getting A Public Preview On Tuesday

UPDATE 29.04.2009: For those who missed it or had problems streaming it (which seems to have been quite a problem), the talk is now also available online for download or streaming-on-demand.

UPDATE 06.05.2009: And here, a video that shows the actual results of the search engine.

Written by Sebastian

April 28, 2009 at 2:23 pm

Going to Zaragoza

leave a comment »

I am going to Zaragoza, Spain to talk about “Computational methods in Production Planning and Logistics” at the MIT-ZARAGOZA Speaker Series this Wednesday (Jan 28th).


The impact of modern computational methods from operations research, analytics, and computer algebra on solving supply chain management, logistics, and planning problems has been enormous. Nowadays, many supply chains could hardly be operated without the support of sophisticated, computerized optimization and planning engines.

I would like to present two special production and planning problems which have been successfully solved with computational methods. The first one is a stowage optimization problem for inland vessels which has been tackled with integer programming. The challenge in transporting containers on inland vessels along auxiliary routes comprises low bridge heights and reduced water depths. In order to ensure safe passage under bridges of critical height, load masters are forced to load the vessels with a reduced number of containers so as to reduce the ship’s maximal height over the waterline, which effectively reduces the ship’s capacity. We present a solution that calculates optimal stowage plans restoring the ship’s full capacity. The second problem is a production problem from the oil industry that has been addressed with methods from computer algebra. The available data describing the oil production process is noisy and of low quality. Nonetheless, understanding the subsurface fluid dynamics that guide the production process is essential for optimizing production and keeping the field in a healthy state. The presented solution concept starts solely from noisy measured data and returns a (polynomial) model that approximates the production process. The derived models can then be used for monitoring and control purposes in a second step.
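To give a flavor of what such a stowage model can look like, here is a minimal integer-programming sketch using scipy, with invented parameters and a deliberately linearized height/draft coupling (loading more containers increases the draft and thereby lowers the height over the waterline). This is only my toy reconstruction, not the model from the talk:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

n_bays = 3
# Invented toy parameters:
w, alpha = 1.0, 0.1                  # weight per tier; draft increase per unit weight
d0, depth = 2.0, 3.0                 # empty draft; available water depth
t, base, clearance = 0.5, 1.0, 3.0   # tier height; empty height; bridge clearance

# x_k = number of container tiers stacked in bay k (integer, at most 5 per bay).
# Draft restriction:  d0 + alpha*w*sum(x) <= depth
A_draft = np.full((1, n_bays), alpha * w)
# Height per bay:     base + t*x_k - alpha*w*sum(x) <= clearance
A_height = t * np.eye(n_bays) - alpha * w
A = np.vstack([A_draft, A_height])
ub = np.array([depth - d0] + [clearance - base] * n_bays)

res = milp(c=-np.ones(n_bays),       # maximize the total number of tiers
           constraints=LinearConstraint(A, -np.inf, ub),
           integrality=np.ones(n_bays),
           bounds=Bounds(0, 5))
print(res.x, -res.fun)               # an optimal stowage: 10 tiers in total
```

The coupling is the interesting part: a more heavily loaded ship sits deeper and thus gains bridge clearance, which is exactly the trade-off a load master exploits. The real engine must of course also handle per-container weights, stability, and water-depth variations along the route.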

Written by Sebastian

January 24, 2009 at 3:46 pm

Posted in Talks
