### Depth and Violation

14Aug11

Over the last year or so I became interested in a family of problems related to linear programming. In ordinary linear programming, we are given some collection of linear constraints, say {ai x ≥ bi} for i = 1, …, n, where x is a d-dimensional variable. A central result in optimization says that there is an efficient algorithm to determine whether any x exists which simultaneously satisfies all of the constraints. However, if no such x exists (i.e. if the program is infeasible), it is interesting to ask: what is the maximum number of constraints that we can satisfy? Or equivalently, what is the minimum number of constraints that need to be deleted in order for the program to become feasible?
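To make the "maximum number of satisfiable constraints" problem concrete, here is a rough brute-force sketch for d = 2 (all names are mine, not from any reference). It uses the observation that some maximum-depth point can be found among the vertices of the arrangement of boundary lines, plus one point per boundary line to handle families of parallel constraints, plus an arbitrary fallback point; degeneracies are handled only crudely via a small tolerance.

```python
from itertools import combinations

def depth(x, constraints):
    """Number of halfspaces {y : a.y >= b} containing the point x."""
    return sum(1 for a, b in constraints
               if a[0] * x[0] + a[1] * x[1] >= b - 1e-9)

def max_depth_2d(constraints):
    """Brute-force maximum halfspace depth in the plane.

    Candidate points: vertices of the boundary-line arrangement,
    one point on each boundary line, and a fallback point.
    """
    candidates = [(0.0, 0.0)]  # fallback if no boundaries intersect
    # one point per boundary line (the point closest to the origin)
    for a, b in constraints:
        n2 = a[0] ** 2 + a[1] ** 2
        if n2 > 0:
            candidates.append((a[0] * b / n2, a[1] * b / n2))
    # vertices of the line arrangement, via Cramer's rule
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel boundary lines, no vertex
        candidates.append(((b1 * a2[1] - b2 * a1[1]) / det,
                           (a1[0] * b2 - a2[0] * b1) / det))
    return max(depth(p, constraints) for p in candidates)

# x >= 1 and x <= 0 conflict, so at most 2 of these 3 constraints
# can hold simultaneously.
cons = [((1.0, 0.0), 1.0), ((-1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]
print(max_depth_2d(cons))  # -> 2
```

This enumerates O(n^2) candidates and evaluates depth at each, so it is exponentially far from the clever fixed-dimension algorithms, but it shows exactly what quantity those algorithms optimize.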

(Figure: a half-plane)

Caveat: these are rough notes, mostly written up for my own benefit, so they may have some bugs, reader beware! (Note to future self: this includes you.)

It is natural to view the problem in a geometric way: each {x | ai x ≥ bi} is a half-space, and we are seeking a point which lies in the maximum number of the half-spaces. In geometry, the number of half-spaces containing a given x is called the depth of x, so we are equivalently looking for a point x of maximum depth. However, the complementary sets {x | ai x < bi} are open half-spaces, so this is the same problem as looking for a point of minimum depth with respect to those (glossing over the difference between openness and closedness).
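The max/min duality is just counting: at any point, the number of closed half-spaces {ai x ≥ bi} containing it plus the number of open complements {ai x < bi} containing it is exactly n. A throwaway sketch (names are mine):

```python
def closed_depth(x, constraints):
    """Count the closed halfspaces {y : a.y >= b} containing x."""
    return sum(1 for a, b in constraints
               if sum(ai * xi for ai, xi in zip(a, x)) >= b)

def open_complement_depth(x, constraints):
    """Count the open complements {y : a.y < b} containing x."""
    return sum(1 for a, b in constraints
               if sum(ai * xi for ai, xi in zip(a, x)) < b)

cons = [((1, 0), 1), ((0, 1), 0), ((-1, -1), -3)]
x = (0.5, 0.5)
# The two depths always sum to n = 3, whatever x is.
print(closed_depth(x, cons), open_complement_depth(x, cons))  # -> 2 1
```

So maximizing the first count over x is literally the same optimization as minimizing the second, even though (as below) the two formulations approximate very differently.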

In any fixed dimension, these sorts of problems can be solved exactly in polynomial time, but they become NP-complete if the dimension is part of the input. Moreover, the approximability of the minimization and maximization versions becomes different. Below, I sum up what is known for the different versions. I also consider the closely related problem of equality systems, where each constraint looks like {ai x = bi}; in this case, geometrically, the half-spaces are replaced by hyperplanes.

Maximum halfspace depth. There is no n^(1−ε) approximation [Feige and Reichman], but there is an n/log n approximation by Halldórsson.

Minimum halfspace depth. There is no 2^(log^(1−ε) n) approximation [Arora et al]. There seems to be no nontrivial approximation known in terms of n; but a (d+1)-approximation can be obtained using Helly’s Theorem applied to halfspaces. (There is an algorithm with time polynomial in n and OPT^d, e.g. this is poly-time when d = log n and OPT is constant.)
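The Helly-based (d+1)-approximation can be sketched as follows for d = 2 (all names mine; this assumes each normal vector a is nonzero and uses a crude brute-force feasibility test). By Helly's theorem, a finite infeasible system of halfplanes contains an infeasible subsystem of at most d+1 = 3 constraints; the witnesses found are disjoint, any feasible deletion must remove at least one constraint from each, and we remove at most 3 per witness, giving the factor d+1 = 3.

```python
from itertools import combinations

def feasible_2d(constraints):
    """Rough feasibility test for a system {a.x >= b} in the plane:
    try the vertices of the boundary-line arrangement, one point per
    boundary line, and a fallback point."""
    if not constraints:
        return True
    candidates = [(0.0, 0.0)]
    for a, b in constraints:
        n2 = a[0] ** 2 + a[1] ** 2
        if n2 > 0:
            candidates.append((a[0] * b / n2, a[1] * b / n2))
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) > 1e-12:
            candidates.append(((b1 * a2[1] - b2 * a1[1]) / det,
                               (a1[0] * b2 - a2[0] * b1) / det))
    return any(all(a[0] * p[0] + a[1] * p[1] >= b - 1e-9
                   for a, b in constraints) for p in candidates)

def helly_delete(constraints):
    """Greedy (d+1)-approximation for minimum constraint deletion, d = 2.

    While infeasible, find an infeasible witness of at most 3
    constraints (guaranteed to exist by Helly) and delete all of it.
    """
    active = list(constraints)
    deleted = []
    while not feasible_2d(active):
        witness = next(list(s)
                       for k in (2, 3)
                       for s in combinations(active, k)
                       if not feasible_2d(list(s)))
        for c in witness:
            active.remove(c)
        deleted.extend(witness)
    return deleted
```

For example, on the infeasible system {x ≥ 1, x ≤ 0, y ≥ 0} the greedy step finds the witness {x ≥ 1, x ≤ 0} and deletes both, so it may delete 2 constraints where OPT = 1, consistent with (but well inside) the factor-3 guarantee.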

Maximum hyperplane depth. There is no PTAS, but there is a 2-approximation. [Amaldi and Kann]

Minimum hyperplane depth. Trivially, the answer is 0: a generic point lies on none of the hyperplanes.

Minimum depth, ≠ constraints (same as minimum deletion to make an equality system feasible, or “min unsatisfy”). There is no 2^(log^(1−ε) n) approximation [Arora et al], but there is an εn/log n approximation [Berman and Karpinski].

Maximum depth, ≠ constraints. Trivial again: a generic point satisfies all n of the ≠ constraints.

You could consider weighted versions where there is a certain dollar value for satisfying/deleting a given constraint, with the original problems corresponding to all values equalling 1. I would say that “typically” such problems resemble the original versions if all the weights are nonnegative. However, what is the complexity of minimum hyperplane depth if you have weights that are a mix of positive and negative numbers?

In the negative feedback arc set problem, we are given a directed graph with positive and negative arc lengths, and want to delete a minimum-size subset of arcs to destroy all negative-weight cycles. This can be reduced to minimum halfspace depth by using node potentials. Negative feedback arc set automatically inherits constant inapproximability, since ordinary feedback arc set is the special case where every arc length is negative; I don’t think anything else is known about this interesting problem.
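The node-potential connection is the classical one from difference constraints: potentials p with p[v] ≤ p[u] + w(u,v) for every arc (a system of halfspaces in the p variables) exist if and only if the graph has no negative cycle, which is exactly what Bellman–Ford certifies. A minimal sketch of that equivalence (the function name is mine):

```python
def has_negative_cycle(n, arcs):
    """Bellman-Ford over nodes 0..n-1 with arcs (u, v, w).

    Feasible potentials p with p[v] <= p[u] + w exist iff there is no
    negative-weight cycle; so deleting arcs to destroy negative cycles
    is deleting constraints to make the potential system feasible.
    """
    dist = [0.0] * n  # implicit virtual source at distance 0 to all nodes
    for _ in range(n):
        updated = False
        for u, v, w in arcs:
            if dist[u] + w < dist[v] - 1e-12:
                dist[v] = dist[u] + w
                updated = True
        if not updated:
            return False  # dist is now a valid set of potentials
    return True  # still relaxing after n rounds => negative cycle

print(has_negative_cycle(2, [(0, 1, 1.0), (1, 0, -2.0)]))  # -> True
print(has_negative_cycle(2, [(0, 1, 1.0), (1, 0, -1.0)]))  # -> False
```

In the first example the cycle 0 → 1 → 0 has total weight −1, so no potentials exist; in the second the cycle weight is 0 and the final dist array itself serves as the feasible potentials.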

FYI: The utility of the Arora et al. paper was observed by Amaldi and Kann in papers from 1995 and 1998, which are useful references, although now dated. A recent paper of Gottlieb and Neylon that connects min unsatisfy to other matrix sparsity problems gives 2^(log^(1/2−ε) n) as the resulting hardness, but I haven’t checked which is right. The recent paper of Elbassioni et al. has useful references too.