Sebastian Pokutta's Blog

Mathematics and related topics

Archive for January 2012

On linear programming formulations for the TSP polytope

with 14 comments

Next week I am planning to give a talk on our recent paper, joint work with Samuel Fiorini, Serge Massar, Hans Raj Tiwary, and Ronald de Wolf, in which we consider linear and semidefinite extended formulations and prove that any linear programming formulation of the traveling salesman polytope has super-polynomial size (independent of P vs. NP). From the abstract:

We solve a 20-year old problem posed by M. Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the maximum cut problem and the stable set problem. These results follow from a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.

The history of this problem is quite interesting. From Gerd Woeginger’s P-versus-NP page (see also Mike Trick’s blog post on Swart’s attempts):

In 1986/87 Ted Swart (University of Guelph) wrote a number of papers (some of them had the title: “P=NP”) that gave linear programming formulations of polynomial size for the Hamiltonian cycle problem. Since linear programming is polynomially solvable and Hamiltonian cycle is NP-hard, Swart deduced that P=NP.

In 1988, Mihalis Yannakakis closed the discussion with his paper “Expressing combinatorial optimization problems by linear programs” (Proceedings of STOC 1988, pp. 223-228). Yannakakis proved that expressing the traveling salesman problem by a symmetric linear program (as in Swart’s approach) requires exponential size. The journal version of this paper has been published in Journal of Computer and System Sciences 43, 1991, pp. 441-466.

In his paper, Yannakakis posed the question whether one can show such a lower bound unconditionally, i.e., without the symmetry assumption, and he conjectured that symmetry ‘should not help much’. This sounded reasonable; however, no proof was known. In 2010, to the surprise of many, Kaibel, Pashkovich, and Theis proved that there exist polytopes whose symmetric extension complexity (the minimum number of facets of a symmetric extension) is super-polynomial, whereas there exist asymmetric extended formulations that use only polynomially many inequalities; i.e., symmetry does matter. On top of that, the polytopes considered were closely related to the matching polytope (used by Yannakakis to establish the TSP result), which rekindled the discussion on the (unconditional) extension complexity of the traveling salesman polytope. Kaibel asked whether 0/1-polytopes in general have extended formulations with a polynomial number of inequalities, or whether there exist 0/1-polytopes that need a super-polynomial number of facets in any extension.

Beware! This is neither in contradiction with nor related to the P-vs-NP question, as we only talk about the number of inequalities and not about the encoding length of the coefficients. The question was settled by Rothvoss in 2011 by a very nice counting argument: there are 0/1-polytopes that need a super-polynomial number of inequalities in any extension.

To make the following slightly more formal, let {P \subseteq {\mathbb R}^n} be a polytope (of some dimension). An extended formulation for {P} is another polytope {Q \subseteq {\mathbb R}^\ell} such that there exists a linear projection {\pi} with {\pi(Q) = P}. The size of an extension {Q} is the number of facets of {Q}, and the extension complexity of {P} (denoted by {\text{xc}(P)}) is the minimum size of an extension of {P}. We are interested in {\text{xc}(P)} where {P} is the traveling salesman polytope.

Our proof heavily relies on a connection between the extension complexity of a polytope and communication complexity (the basic connection was made by Yannakakis and it was later extended by Faenza, Fiorini, Grappe, and Tiwary as well as by Fiorini, Kaibel, Pashkovich, and Theis in 2011). Suppose that we have an inner and an outer description of our polytope {P}, say {P = \text{conv}\{v_1, \dots, v_m\} = \{x \in {\mathbb R}^n \mid Ax \leq b\}}. Then we can define the slack matrix {S(P)} by {S_{ij} = b_i - A_i v_j}, i.e., the slack of the vertices with respect to the inequalities. By a theorem of Yannakakis, the extension complexity of a polytope equals the nonnegative rank of {S}, which is essentially equivalent to determining the best protocol to compute the entries of {S} in expectation (Alice gets a row index and Bob a column index). The latter can be bounded from below by the non-deterministic communication complexity, and we use a certain matrix {M_{ab} = (1-a^Tb)^2} which has large non-deterministic communication complexity (see de Wolf 2003). This matrix is special as it constitutes the slack of some valid inequalities for the correlation polytope, which eventually leads to an exponential lower bound for the extension complexity of the correlation polytope; the latter is affinely isomorphic to the cut polytope. Then, via a reduction-like mechanism, a similar lower bound {\text{xc}(\text{stableSet}) = 2^{\Omega(n^{1/2})}} is established for the stable set polytope of a certain family of graphs. Finally, using the fact that the TSP polytope contains any stable set polytope as a face (Yannakakis), for suitably chosen parameters we obtain {\text{xc}(\text{TSP}) = 2^{\Omega(n^{1/4})}}.
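To make the slack matrix concrete, here is a minimal numpy sketch (a toy example of mine, not from the paper) that computes {S(P)} for the unit square; the choice of polytope and the ordering of facets and vertices are arbitrary illustrations.

```python
import numpy as np

# Outer description Ax <= b of the unit square [0,1]^2:
# -x1 <= 0, -x2 <= 0, x1 <= 1, x2 <= 1
A = np.array([[-1, 0], [0, -1], [1, 0], [0, 1]])
b = np.array([0, 0, 1, 1])

# Inner description: the four vertices v_1, ..., v_4
V = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])

# Slack matrix: S[i, j] = b_i - A_i . v_j (rows = inequalities, columns = vertices)
S = b[:, None] - A @ V.T
print(S)  # entrywise nonnegative

# By Yannakakis' theorem, xc(P) equals the nonnegative rank of S, i.e. the
# smallest r such that S = T U with entrywise nonnegative T (4 x r) and U (r x 4).
```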

Here are the slides:

Written by Sebastian

January 5, 2012 at 3:17 pm

Fundamental principles(?) in mathematics

with 2 comments

As mentioned in one of my previous posts, David Goldberg and I had a nice discussion about “fundamental concepts” in mathematics. Our definition of “fundamental” was that:

  1. once seen, you cannot imagine not having known it before, and it completely changes your way of thinking, and
  2. it is somewhat realistic, i.e., when it is subtracted from the “world of thinking”, something is missing.

So here is our preliminary list of things that we came up with – in random order – together with a very brief (totally biased) meta-description of what we mean by each term. Of course, this list is highly subjective! For each of these “fundamental concepts” the idea is to have (about) three applications and to try to distill the main core. There will (probably) be a separate blog post for each point on the list.

  1. Identity/Equality. Closely related to being isomorphic. The power of identity is so penetrating that I cannot even find a short explanation. Do I have to? (Aristotle’s first law of thought)
  2. Contradiction. Showing that something cannot be true as it leads to a contradiction or inconsistency. Closely related to this is the principium tertii exclusi, or law of the excluded third, as this is how we often use proofs by contradiction: a statement holds under various assumptions because its negation leads to a contradiction. If you do not believe in the law of the excluded third, then you obtain a different logic/mathematics. In particular, in these logics usually every proof also constitutes some form of an algorithm, as establishing existence merely by deriving a contradiction from assumed non-existence is not allowed. (See also Aristotle’s second/third laws of thought.)
  3. Induction. Establishing a property by relying on the same property for smaller sub-objects.
  4. Recursion. Somewhat dual to induction: a larger object is defined as a function of smaller objects that have been subject to the same construction themselves.
  5. Fixpoint. The existence of a point that is invariant under a map. Think of equilibria in games.
  6. Symmetry. Invariance under transformations: take a cube – rotating it does not really change the cube.
  7. Invariants. Think of the dimension of a vector space. Invariants are a powerful way to show that two things are not equal (or isomorphic).
  8. Limits. What would we do without limits? The idea of hypothetically continuing a process infinitely long. Think of the definition of a derivative.
  9. Diagonalization. One of my personal favorites. Constructing an object that is not in a given family by making sure it differs from every member of the family in (at least) one position. Diagonalization often exploits self-reference. An example is Cantor’s proof that the reals are uncountable (a small sketch follows this list).
  10. Double counting. You count a family of objects in two different ways; the resulting “amounts” then have to be identical. A typical example is the handshaking lemma in graphs.
  11. Proof. The notion of proof is very fundamental. Once proven, a statement remains true (provided consistency etc.). Interestingly, it can be proven that some things cannot be proven. A good example of the latter is the existence of inaccessible cardinals, which cannot be established in ZFC (provided ZFC is consistent).
  12. Randomness. Randomness is an extremely fundamental concept. One of my favorite applications is probably the probabilistic method. Think about Johnson’s {7/8}-approximation algorithm for 3SAT (see the simulation after this list) or the PCP theorem.
  13. Algorithm. When considering a function {f: M \rightarrow N} we are often not just interested in what {f} computes but in particular in how it can be computed. In this sense the algorithmic paradigm is an additional layer on top of the somewhat descriptive layer of classical mathematics.
  14. Exponential growth. What we were particularly thinking about was the idea that a relative improvement bounded away from {0} ensures exponential progress (see the numeric illustration after this list). This is used regularly in different scaling algorithms such as barrier algorithms, potential reduction methods, and certain flow algorithms.
  15. Information. The idea that often a critical amount of information is necessary to decide a property; fooling-set-like arguments can then show that the available information is not sufficient. Prime examples include the classical fact that sorting via comparisons needs {\Omega(n \log n)} comparisons (see the computation after this list), communication complexity, and query complexity.
  16. Function/Relation. Mapping one set to another. Particularly important when the function/relation is a homomorphism, i.e., when it preserves the structure.
  17. Density and approximation. The idea that a set (such as the reals) can be approximated arbitrarily well by a much smaller set (such as the rationals). Approximating an exponential-size set by a polynomial-size one is also something that we use in approximation algorithms, say, when we round the input: the set of polytime-solvable (rounded) instances is “dense” in the set of all instances. It can also be found in set theory when using prediction principles (such as Jensen’s diamond principle or Shelah’s Black Box) to predict functions on a stationary set by an exponentially smaller set.
  18. Implicit definitions. The concept of defining something not in an explicit manner but as a solution to a set of constraints.
  19. Abstraction. The use of variables is so ingrained in us that we cannot even imagine doing serious mathematics without them. But abstraction is much more: it is the ability to see more clearly because we “abstract away” unnecessary details, and we use abstraction to unify seemingly unrelated things.
  20. Existence (in the sense that Brouwer hated). One of the keywords here is probably non-constructivism; the probabilistic method and indirect arguments are two prominent methods in this category. This was something that Brouwer despised: the idea to infer, e.g., the existence of something merely because the contrary statement would lead to a contradiction (Brouwer’s school of thought denies the tertium non datur). The probabilistic method might have been fine with him – although that is not clear at all, as on a deep level we are merely trading an existential quantifier for a random one… long story…
  21. Duality. By duality we mean the wider idea of duality, i.e., for example the forall quantifier and the existential quantifier. Basically, when we talk about duality we often think about some structure describing the “space of positive statements” and a dual structure that describes the “space of negative statements”. In some sense duality is a compact representation of the negation of a statement.
  22. Counting. Counting is again something that penetrates every mathematical theory. My favorite application of counting is the Pigeonhole principle.
  23. Hume’s principle (suggested by Hanno – see comments). Two collections have the same size if there exists a bijection between them. Somewhat related to “equality”; however, here we explicitly ask for the existence of a bijection. For example, there are as many integers as there are rationals.
  24. Infinity (suggested by Hanno – see comments). The idea that something is not finite. With the notion of infinity I feel that the notions of countably infinite and uncountably infinite are closely connected. In fact, the Continuum Hypothesis (CH) is such a case: it is consistent with ZFC and asserts that the first uncountably infinite cardinal is the size of the power set of the natural numbers (essentially the reals), i.e., that \aleph_1 = 2^{\aleph_0}. However, in other models of set theory \aleph_1 \neq 2^{\aleph_0} is possible, e.g., by adding Cohen reals.
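A toy rendering of the diagonalization item (9): given any finite list of 0/1 sequences (represented here as functions from positions to bits, a choice made purely for illustration), we build a sequence that differs from the {k}-th one at position {k}.

```python
def diagonal(sequences):
    """Return a sequence (as a function) that differs from sequences[k] at position k."""
    return lambda k: 1 - sequences[k](k)

# Three example sequences: all zeros, alternating, all ones
seqs = [lambda k: 0, lambda k: k % 2, lambda k: 1]
d = diagonal(seqs)

# d differs from each seqs[k] at position k, so it equals none of them
assert all(d(k) != seqs[k](k) for k in range(len(seqs)))
print([d(k) for k in range(3)])  # [1, 0, 0]
```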
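For the randomness item (12), here is a small simulation of the fact behind Johnson’s algorithm: a uniformly random assignment satisfies a clause with three distinct variables with probability {1 - (1/2)^3 = 7/8}, hence {7/8} of all clauses in expectation. The instance below is made up for illustration.

```python
import random

# A clause is a tuple of literals; literal i > 0 means "variable i is True",
# literal i < 0 means "variable |i| is False".
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
n_vars, trials = 3, 50_000

def satisfied(clause, assignment):
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

total = 0
for _ in range(trials):
    assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    total += sum(satisfied(c, assignment) for c in clauses)

print(total / (trials * len(clauses)))  # close to 7/8 = 0.875
```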
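For the exponential growth item (14), a tiny numeric illustration: if every step makes a relative improvement of at least {\epsilon}, a potential decays exponentially, so it drops below any target within {\log(\Phi_0/\Phi_{\text{target}})/\epsilon} steps. All numbers below are arbitrary.

```python
import math

phi, eps, target = 1000.0, 0.01, 1e-6
steps = 0
while phi > target:
    phi *= 1 - eps  # each iteration makes 1% relative progress
    steps += 1

bound = math.log(1000.0 / 1e-6) / eps
print(steps, math.ceil(bound))  # the step count stays below the bound
```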
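For the information item (15), the counting behind the sorting lower bound: a comparison sort must distinguish all {n!} orderings, and each comparison reveals at most one bit, so {\lceil \log_2 n! \rceil = \Omega(n \log n)} comparisons are needed in the worst case.

```python
import math

for n in [4, 8, 16, 32]:
    lower = math.ceil(math.log2(math.factorial(n)))  # information-theoretic bound
    print(n, lower, math.ceil(n * math.log2(n)))     # compare with n * log2(n)
```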

Written by Sebastian

January 2, 2012 at 8:49 pm
