The author is M.V. Ramana, and the paper appeared in Mathematical Programming (1997), Vol. 77, pp. 129-162.

Title: An exact duality theory for semidefinite programming and its complexity implications

You can check with Pablo Parrilo at MIT, who has worked with the author (M.V. Ramana).

Note first that the P-completeness of LP is irrelevant here. Under polynomial-time reductions, every problem in P is (trivially) P-complete. LP is P-complete under logspace reductions, and that is relevant only if our goal were to show that some problem is unlikely to be in NC^i for some constant i.

Leaving that aside, I claim that even if you could prove that the TSP polytope with an objective cut does not have a poly-size LP formulation for any non-trivial objective cut, that would still not prove P ≠ NP. It’s a simple quantifier issue:

* P-completeness of LP means: if (a decision version of) TSP is in P, then there exists a polynomial-time algorithm M such that for every instance (G, k) of TSP, M(G, k) outputs a linear program P which is feasible if and only if the smallest TSP tour of G has value at most k.

So this says that for every yes-instance of the TSP problem we can find some feasible poly-size LP formulation. But that is trivial: we could just take the supposed polytime algorithm for TSP and have it output {x : 0 <= x <= 1} on yes-instances and {x : 0 <= x, x <= -1} on no-instances. To make it even more trivial, Michael's argument does not even take uniformity into consideration, i.e. it never requires that all the LPs be generated by the same polytime algorithm.

* A result like yours shows that no *single* LP formulation solves *all* TSP instances. Even if you could prove that adding an objective cut does not lead to a poly-size LP formulation, that would still not be enough: you would only be showing that we cannot solve TSP by starting from a *single* TSP LP formulation and adding objective cuts to it.

In other words, P-completeness means "for every instance there exists an equivalent poly-size LP formulation" (which is trivial). The negation of this would be "there exists an instance for which there is no equivalent poly-size LP formulation" (which is trivially false). A result like yours says "for every LP formulation there exists an instance that it does not solve" (which is awesome!).
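To make the triviality concrete, here is a minimal sketch (hypothetical helper names, not from any paper): given any black-box decider for TSP, the "reduction" may simply emit one fixed feasible LP on yes-instances and one fixed infeasible LP on no-instances, so the LPs capture nothing about TSP itself.

```python
# Sketch of the trivial per-instance "LP formulation": the LP's
# feasibility matches the decider's answer purely by construction.

def trivial_lp(is_yes_instance):
    """Return bounds (lo, hi) of the one-variable LP {x : lo <= x <= hi}.

    yes-instance -> {x : 0 <= x <= 1}   (feasible)
    no-instance  -> {x : 0 <= x, x <= -1}  (infeasible)
    """
    return (0.0, 1.0) if is_yes_instance else (0.0, -1.0)

def feasible(bounds):
    # A one-variable box LP is feasible iff its interval is nonempty.
    lo, hi = bounds
    return lo <= hi

# Feasibility of the emitted LP equals the decider's answer, so a
# polytime TSP decider would yield "poly-size LP formulations" that
# tell us nothing new about TSP.
print(feasible(trivial_lp(True)))   # True
print(feasible(trivial_lp(False)))  # False
```

This is exactly the quantifier gap above: a per-instance LP can be chosen after the answer is known, whereas a lower bound must hold against every single formulation.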

I am trying to solve a bi-objective (P-median/P-center) LP problem with CPLEX. The problem is that CPLEX closes the optimality gap only to 21% and then gives an out-of-memory (1001) error message. I am aware that there may be a way to feed the 21%-gap solution back into CPLEX to continue and finish the processing, but I don’t know how to do that. Can somebody here help with this?

I am also interested in how to manage memory during the solve so that it does not exhaust the available memory, which is what ultimately triggers the out-of-memory message and aborts the whole run.
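A possible direction, sketched with the CPLEX interactive optimizer (parameter names are from IBM's CPLEX documentation; check the exact spelling and defaults for your version): cap in-core memory, push the branch-and-bound tree to disk, and save the incumbent as a MIP start so a later run can resume from it rather than from scratch.

```text
CPLEX> set workmem 2048                  ! working memory limit, in MB
CPLEX> set mip strategy file 3           ! write node file to disk, compressed
CPLEX> set mip limits treememory 20000   ! cap on the branch-and-bound tree (MB)
CPLEX> set emphasis memory y             ! trade some speed for memory savings
CPLEX> optimize
CPLEX> write incumbent.mst               ! save the best solution as a MIP start
...
CPLEX> read incumbent.mst                ! warm-start a later run from it
CPLEX> optimize
```

This does not guarantee closing the last 21% of the gap, but it typically lets the search run longer before memory runs out, and the `.mst` warm start means restarted runs do not lose the incumbent.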

Thanks in advance.

I’m using GLPK. I’m going to post some of my models online and then I’ll send you the link.

Thanks for sharing your knowledge!
