Roxy Paine’s sculpture “Defunct” (photo, Nicole Marti)
There were a couple of seminars today and one mentioned a result giving the following interesting corollary. Suppose we have a graph G, and with each edge e we associate three numbers a_e, b_e, c_e, which we interpret in the following way: to construct edge e we need to use a_e kilograms of aluminum, b_e kilograms of beryllium, and c_e kilograms of chromium. Later on we’ll decide to construct a spanning tree of the graph, which we would like to do cheaply; however, the market price of the three metals affects which spanning tree is cheap. Nonetheless, it’s possible to fix a relatively small set of spanning trees ahead of time, and buy just one:
For k types of metals, there is a set S of polynomially many spanning trees, so that for any assignment of prices to the metals, one of the spanning trees in S has minimal price among all spanning trees.
The theorem is interesting since the total number of spanning trees can be exponential in the size of the graph. Is the theorem tight? I do not know.
The theorem follows from a more general result in the seminar (which was presented by Marco Di Summa, another member of my group at EPFL).
Take a polyhedron, and let D be the set of all directions of its edges. Then any projection of the polyhedron into k dimensions has at most O(|D|^(k−1)) vertices.
We combine this with the fact that the spanning tree polytope of a graph with m edges has fewer than m^2 edge directions (a similar fact holds for any matroid): each edge of the polytope corresponds to swapping one graph edge for another, so its direction is determined by a pair of graph edges.
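The corollary can be sanity-checked numerically. Below is a quick Python experiment (the graph K5, the random 3-metal costs, and the sampled price vectors are all made up for illustration, not taken from the seminar): enumerate all spanning trees, then see how few of them are ever optimal as the metal prices vary.

```python
import itertools, random

def spanning_trees(n, edges):
    """All spanning trees of a graph on vertices 0..n-1 (brute force)."""
    trees = []
    for combo in itertools.combinations(range(len(edges)), n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for i in combo:
            ru, rv = find(edges[i][0]), find(edges[i][1])
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:  # n-1 acyclic edges on n vertices form a spanning tree
            trees.append(combo)
    return trees

random.seed(0)
n = 5
edges = list(itertools.combinations(range(n), 2))           # K5
cost = [[random.random() for _ in range(3)] for _ in edges]  # 3 metals per edge

trees = spanning_trees(n, edges)  # Cayley's formula: K5 has 5^3 = 125 trees

ever_optimal = set()
for _ in range(2000):
    p = [random.random() for _ in range(3)]  # random metal prices
    ever_optimal.add(min(trees, key=lambda t: sum(
        p[j] * cost[i][j] for i in t for j in range(3))))
# ever_optimal is a valid choice of S for these costs, and it is
# far smaller than the full set of 125 trees
```

Sampling prices does not prove the bound, of course, but it makes the phenomenon visible: only the trees on the lower convex hull of the projected polytope ever win.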
This week I have been at EPFL, Lausanne, Switzerland. It has been a good opportunity to practice my French; there is not as much English spoken here as in the more touristic parts of the country. Nonetheless this is a good place to come if you are a tourist: it is near beautiful Lake Geneva, home to lots of awesome castles and old buildings, and good food and wine is plentiful. For some reason a main dish I see on a lot of menus is cold roast beef with tartar sauce, usually $25 (Swiss francs ≈ Canadian dollars); as yet I don’t understand this.
In news that is not directly related, I was recently reading papers on a technique called disjunctive programming going back to the 1970s. The main tool can be interpreted as a very cute result having to do with the fundamentals of LPs that I will depict below. It has to do with taking the convex union of two polyhedra. It’s very simple but seems relatively unknown (or at least, I have seen it buried deep in papers but never in courses where it would fit well).
A linear program in two dimensions can be thought of as a polygon (dots connected by straight lines), pictured as “P1” at the left. There are several “linear constraints” which are all of the format
“2.1 * x + 1.8 * y ≥ 6.7”
and this particular constraint is the line pictured with the vector (arrowhead)
coming out of it. This constraint says that only points to the right of that line are feasible for this constraint; P1 here has 4 other constraints corresponding to its four other sides, and the feasible region is the pentagon. At the right is a second polyhedron which I will call P2, which is similarly defined by four linear constraints.
Now can we get a collection of constraints which expresses the intersection of these two polyhedra? In this case it is easy: simply take all the constraints of P1, and throw in all the constraints of P2; then the feasible region of the resulting system is exactly the intersection of P1 and P2. The intersection polyhedron is pictured in black on the left-hand side. (In this example, there are 9 constraints, but the intersection polyhedron has only 7 sides, since two of these constraints become redundant.)
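Here is a tiny Python check of the "just concatenate the constraints" idea, using two made-up axis-aligned squares as P1 and P2 (each constraint is a row (a, b) meaning a·z ≥ b):

```python
# Hypothetical constraint data for two overlapping squares.
P1 = [((1, 0), 0), ((-1, 0), -3), ((0, 1), 0), ((0, -1), -3)]  # 0<=x<=3, 0<=y<=3
P2 = [((1, 0), 1), ((-1, 0), -4), ((0, 1), 1), ((0, -1), -4)]  # 1<=x<=4, 1<=y<=4

def feasible(rows, z):
    """Does z satisfy every constraint a.z >= b in the list?"""
    return all(a[0] * z[0] + a[1] * z[1] >= b for a, b in rows)

inter = P1 + P2  # concatenating the lists describes the intersection

assert feasible(inter, (2, 2))          # lies in both squares
assert not feasible(inter, (0.5, 0.5))  # in P1 only
assert not feasible(inter, (3.5, 3.5))  # in P2 only
```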
Whenever you do something with an intersection in mathematics, it is natural to ask if you can do the same thing with a union. That is to say, can we get a linear program to express the set of all points which lie in either P1 or P2 or both? Immediately we can see there is a problem with this, since linear programs can only express convex regions, and the union may not be convex. But it turns out there is an easy way to express the convex hull of the union of P1 and P2, i.e. the smallest convex set containing both P1 and P2 (pictured on the right-hand side).
To fully explain, we will need some symbols. Let z be a vector containing all the variables of the linear program, suppose P1 is represented by the linear system A1z ≥ b1 and P2 is represented by the linear system A2z ≥ b2. Essentially by definition, the convex hull of the two convex shapes P1 and P2 is equal to

{ λz1 + (1−λ)z2 : A1z1 ≥ b1, A2z2 ≥ b2, 0 ≤ λ ≤ 1 }.

The problem is that this program in the variables (z1, z2, λ) is not linear, e.g. since we are multiplying λ with z1. But the slightly tricky & nice part is that if you rescale the variables (substituting w1 = λz1 and w2 = (1−λ)z2), you get the following linear formulation for the same set (exact when P1 and P2 are nonempty polytopes):

{ w1 + w2 : A1w1 ≥ λb1, A2w2 ≥ (1−λ)b2, 0 ≤ λ ≤ 1 }.
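Here is a quick numerical sanity check of that rescaling, again with two made-up squares as P1 and P2: for any z1 in P1, z2 in P2, and λ in [0,1], the rescaled point w1 = λz1, w2 = (1−λ)z2 satisfies the linear system, and w1 + w2 recovers exactly the convex combination λz1 + (1−λ)z2.

```python
import random

# Hypothetical 2-D polyhedra: rows (a, b) mean a.z >= b.
A1 = [((1.0, 0.0), 0.0), ((-1.0, 0.0), -2.0),
      ((0.0, 1.0), 0.0), ((0.0, -1.0), -2.0)]   # square [0,2] x [0,2]
A2 = [((1.0, 0.0), 3.0), ((-1.0, 0.0), -5.0),
      ((0.0, 1.0), 1.0), ((0.0, -1.0), -3.0)]   # square [3,5] x [1,3]

def satisfies(rows, z, scale=1.0):
    """Check A z >= scale * b componentwise (with a tiny tolerance)."""
    return all(a[0] * z[0] + a[1] * z[1] >= scale * b - 1e-9 for a, b in rows)

random.seed(1)
for _ in range(100):
    z1 = (random.uniform(0, 2), random.uniform(0, 2))  # a point of P1
    z2 = (random.uniform(3, 5), random.uniform(1, 3))  # a point of P2
    lam = random.random()
    w1 = (lam * z1[0], lam * z1[1])                    # w1 = lam * z1
    w2 = ((1 - lam) * z2[0], (1 - lam) * z2[1])        # w2 = (1-lam) * z2
    assert satisfies(A1, w1, scale=lam)                # A1 w1 >= lam * b1
    assert satisfies(A2, w2, scale=1 - lam)            # A2 w2 >= (1-lam) * b2
    z = (w1[0] + w2[0], w1[1] + w2[1])
    # z is exactly the convex combination lam*z1 + (1-lam)*z2
    assert abs(z[0] - (lam * z1[0] + (1 - lam) * z2[0])) < 1e-12
    assert abs(z[1] - (lam * z1[1] + (1 - lam) * z2[1])) < 1e-12
```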
Good times with linear programs! This type of method leads to something called “lift-and-project” in the literature, which amounts to automatically strengthening a linear program in such a way that it becomes “closer” to having no fractional vertices.
Exhibit A: Maple, a sapling of the University of Waterloo. It is a computer algebra system allowing you to perform both symbolic and numerical computation. It has lots of built-in “packages” including one called GraphTheory, and other people can design add-on packages as well, such as convex (for polyhedral combinatorics, related to linear programming).
Exhibit B: Geogebra, a graphical dynamic algebra/geometry program. It hearkens back to the days of Geometer’s Sketchpad and Cabri Geometry. The idea is that “compass and straightedge”-like constructions are automated and made dynamic. Example: to construct an equilateral triangle on base AB, use the circle tool to draw a circle centred at A through B, and one centred at B through A, then their intersection C makes ABC equilateral. Then drag A or B, and watch C magically update as you go. Geogebra is even more awesome because it is free, and because it can create high-quality graphics suitable for publication.
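The equilateral-triangle construction can also be done in a few lines of Python (a small sketch I'm adding for illustration; the function name is mine): intersecting the two circles amounts to rotating B about A by 60 degrees.

```python
import math, cmath

def equilateral_apex(A, B):
    """Third vertex of an equilateral triangle on base AB.

    Mimics the compass construction: C lies on both the circle centred
    at A through B and the circle centred at B through A, which
    algebraically is B rotated 60 degrees about A.
    """
    a, b = complex(*A), complex(*B)
    c = a + (b - a) * cmath.exp(1j * math.pi / 3)
    return (c.real, c.imag)

C = equilateral_apex((0.0, 0.0), (1.0, 0.0))
# C is equidistant from A and B, at distance |AB| from each
```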
I was recently using Maple to do some polyhedral combinatorics on graphs; but Maple is pretty bad at making nice pictures of graphs. Born of this need, I wrote a Maple procedure which exports to Geogebra (whose file format is in turn an .xml file that has been .zip compressed): vertices become points, and edges become line segments. I’m posting my sketchy code below in case it is somehow useful to a future searcher (thanks google!) but first, the pretty pictures resulting from these hijinx!
Buzzwords to describe these pictures: extreme point solutions for the parsimonious undirected cut relaxation of the spanning tree polytope. These solutions (pictured on 10 and 30 vertices) are special because they have high (support) degree and high denominator (the (|V|/2)-th Fibonacci number).
Code follows the jump
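(The Maple source is past the jump. As a rough Python sketch of the same export idea: a .ggb file is just a zip archive whose main entry is geogebra.xml. The XML skeleton below is a simplified guess at the schema, so treat the element names as illustrative only, and compare against a file saved by GeoGebra itself for the real format.)

```python
import zipfile
from xml.sax.saxutils import escape

def export_ggb(points, segments, path="graph.ggb"):
    """Write a minimal GeoGebra-style file: a zip whose main entry is
    geogebra.xml.  points: dict label -> (x, y); segments: (label, label)
    pairs.  NOTE: the XML schema here is a simplified assumption, not
    the official GeoGebra format."""
    parts = ['<geogebra format="5.0"><construction>']
    for lab, (x, y) in points.items():
        parts.append(f'<element type="point" label="{escape(lab)}">'
                     f'<coords x="{x}" y="{y}" z="1.0"/></element>')
    for a, b in segments:
        parts.append(f'<command name="Segment">'
                     f'<input a0="{escape(a)}" a1="{escape(b)}"/>'
                     f'<output a0="{escape(a)}{escape(b)}"/></command>')
    parts.append('</construction></geogebra>')
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("geogebra.xml", "".join(parts))

# a triangle, as a vertex dict plus an edge list
export_ggb({"A": (0, 0), "B": (1, 0), "C": (0.5, 0.87)},
           [("A", "B"), ("B", "C"), ("C", "A")], "triangle.ggb")
```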
After figuring out why the equation in the last post described a tetrahedron, I was able to apply the same technique to generate the four other Platonic solids. By popular demand, here they are!
The rounded cube has equation
and the rounded octahedron is
I won’t post the icosahedron’s and dodecahedron’s formulas because they are pretty big and ugly (they involve the square root of 5, and respectively have degrees 12 and 9).
Bill Cook gave a talk at Waterloo about some of his work on the Traveling Salesman Problem (TSP). He has worked in the area for something like two decades. Here are some notes and highlights of his talk.
One of his main research interests is the solution of TSPs to optimality. It’s worth pointing out that many traditional applications are interested in getting cheap solutions, which many approximation algorithms will give you; so why should we bother to solve for the cheapest solution, which is potentially much harder computationally since TSP is NP-hard? He gave one recent example of radiation hybrid genome sequencing where the exact optimum is needed, and his work has been used to improve clinical sequencing operations.
Recently (with some colleagues) he was able to solve the largest unsolved TSPLIB instance to optimality; it had about half a million cities. It took many computer-years of time; for this reason he stressed the importance of having a method to independently verify the optimality of the computation’s result. However, by cleverly extracting the appropriate “certificates,” the optimality of the best solution can be verified in just about 12 hours of computer time.
He also pointed out that the CPLEX “default optimality tolerance” is about 0.0001%, and that if a user were unaware of this, they could easily find an “optimal” solution to an IP which is actually non-optimal. Along these lines, he described some details of how he is working on an exact-arithmetic LP solver for use in his work. There was some mention that exact linear solving of sparse systems (e.g., Wiedemann’s method) would be useful in part of his work.
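To make the exact-arithmetic point concrete, here is a tiny sketch in Python: Gaussian elimination over the rationals gives answers with no rounding at all. (This is just an illustration of exact linear algebra with `fractions.Fraction`; Wiedemann’s method, mentioned in the talk, is a different algorithm suited to large sparse systems.)

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b by Gaussian elimination over the rationals.
    Every intermediate value is an exact Fraction, so there is no
    floating-point tolerance to misjudge."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]   # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[-1] for row in M]

x = solve_exact([[3, 1], [1, 2]], [5, 5])  # 3x+y=5, x+2y=5
```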
- proving lower/upper bounds on quantities of the form ±sqrt(k_i) seems to be hard, affecting exact solvability of Euclidean instances
- he’s working on a “Star TSP” with half a billion cities, and some “geometric decomposition” techniques
- duality on near-planar graphs has helped him to solve some large instances
- apparently “256-way tentative branching” is much sweeter than “strong branching.”