About a month ago, the Facebook Hacker Cup included a nice programming problem involving a certain kind of matching. The min-cost bipartite matching problem can be stated like this:

  • You have a number P of people, and a number Q ≤ P of jobs. Each person is willing to do any of the jobs, but each person will charge a different price for each job. You are only allowed to hire each person to do at most one job, and you need every job to get done. What is the cheapest way to hire employees for all the jobs?

Just as a quick concrete example, let’s say we have P=3 people (Alice, Bob, Carl) and Q=2 jobs (Vacuuming, Window-washing).

  • Alice charges $50 to do vacuuming and $40 to do window-washing.
  • Bob charges $30 to do vacuuming and $20 to do window-washing.
  • Carl charges $60 to do vacuuming and $30 to do window-washing.

In this case, the minimum cost to get all the jobs done is $60: pay Bob $30 for vacuuming, and pay Carl $30 for window-washing.
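The toy instance is small enough to verify by brute force, trying every way to assign two distinct people to the two jobs (a quick sketch of my own, not from the contest):

```python
# Brute-force check of the toy example: try every ordered pair of distinct
# people for the two jobs and take the cheapest total.
from itertools import permutations

# cost[person] = (price for vacuuming, price for window-washing)
cost = {
    "Alice": (50, 40),
    "Bob":   (30, 20),
    "Carl":  (60, 30),
}

best = min(
    cost[p1][0] + cost[p2][1]          # p1 vacuums, p2 washes windows
    for p1, p2 in permutations(cost, 2)  # distinct people for the two jobs
)
print(best)  # 60: Bob vacuums ($30), Carl washes windows ($30)
```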

I had known about one way to solve this sort of problem, but I later learned another approach called the Hungarian Algorithm. It’s a very efficient approach, taking O(Q²P) time. In this post I’ll explain how it works, and in the next post I will explain how it generalizes to an algorithm for the min-cost max-flow problem, specifically the capacity-scaling successive shortest paths algorithm.


There are some preliminaries that I am going to build on here, which are generally covered in any introduction to flows:

  • bipartite matching can be thought of as a flow problem, by attaching a source node s to all people, directing edges from people to jobs, and attaching all jobs to a sink node t
  • given some flow in a graph, the residual graph represents the unused capacity of the edges, together with added reverse edges that represent the cost that can be saved by decreasing the flow on that edge
    • in the min-cost flow setting, a reverse edge’s cost is the negative of the original edge’s cost (i.e., cost savings)
  • an augmenting path is a path in the residual graph (using edges with nonzero capacity) from the source s to the sink t
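To fix ideas, here is one way the bipartite flow construction above might be represented in code; the node numbering and the edge-list format are my own choices, not anything canonical:

```python
# Node 0 is the source s, nodes 1..P are people, nodes P+1..P+Q are jobs,
# and node P+Q+1 is the sink t.
def build_network(costs):
    """costs[i][j] = price person i charges for job j."""
    P, Q = len(costs), len(costs[0])
    s, t = 0, P + Q + 1
    edges = []  # (u, v, capacity, cost)
    for i in range(P):
        edges.append((s, 1 + i, 1, 0))                        # s -> person, free
        for j in range(Q):
            edges.append((1 + i, 1 + P + j, 1, costs[i][j]))  # person -> job
    for j in range(Q):
        edges.append((1 + P + j, t, 1, 0))                    # job -> sink, free
    return s, t, edges

# The Alice/Bob/Carl instance: 3 + 3*2 + 2 = 11 edges
s, t, edges = build_network([[50, 40], [30, 20], [60, 30]])
print(len(edges))  # 11
```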

The simpler max-flow problem, where we don’t care about costs, can be solved like this:

  • As long as the residual graph has an augmenting path from s to t, increase the flow on any augmenting path.

There are several ways to prove this algorithm’s correctness. Here is one way, which will be relevant later on:

Let F be some non-maximum flow, and X be any maximum flow. View the difference between X and F as a flow in F’s residual graph. By the flow decomposition theorem, this difference can be written as a nonempty list of augmenting st paths plus a possibly empty list of cycles. Therefore, there is an augmenting st path, and we can keep improving F.

Let N be the number of nodes, and M be the number of edges. For unit capacities, this simple max-flow approach takes at most N iterations of a path-finding algorithm, which itself takes M steps, and so the total time complexity is O(MN).
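The simple max-flow loop above can be sketched as follows, with a BFS finding each augmenting path; the dense capacity matrix is my own choice of representation, for brevity rather than efficiency:

```python
# Augmenting-path max-flow on unit capacities: BFS for any s-t path in the
# residual graph, then push one unit along it (creating reverse edges).
from collections import deque

def max_flow(n, edges, s, t):
    """edges: list of (u, v) unit-capacity arcs on nodes 0..n-1."""
    cap = [[0] * n for _ in range(n)]
    for u, v in edges:
        cap[u][v] += 1
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow          # no augmenting path left: flow is maximum
        v = t
        while v != s:            # push one unit, adding residual reverse edges
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

# 3 people (nodes 1..3), 2 jobs (4..5), s=0, t=6, everyone willing:
net = [(0, 1), (0, 2), (0, 3),
       (1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5),
       (4, 6), (5, 6)]
print(max_flow(7, net, 0, 6))  # 2: both jobs can be filled
```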

Moreover, this simple algorithm works quite nicely when we add capacities to the edges (but still ignoring costs):

  • Taking the shortest path each time yields a maximum flow in O(MN) iterations;
  • taking the fattest path (highest residual capacity) yields a maximum flow in O(M log K) iterations, where K is the maximum capacity.

Greed and Costs

A greedy algorithm is usually one that just makes the cheapest decision at every point in time, without worrying about trouble that can come down the road from not planning ahead. Here’s the simplest greedy algorithm you could imagine for our matching problem, now that costs are involved:

  • Repeatedly do the following: out of all the people you haven’t hired and jobs you haven’t filled, pay the cheapest price you can to get one more job done.

However, this algorithm is not correct. In our example above, we’d first pay Bob $20 to do window-washing, since that’s the cheapest of all prices. After that, the cheapest price that doesn’t involve Bob or window-washing is to pay Alice $50 to vacuum. This total cost of $70 is not optimal.
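Here is the failing greedy, run on the toy instance to reproduce that suboptimal $70 (a sketch of my own):

```python
# The broken greedy: repeatedly take the cheapest remaining (person, job) pair.
def greedy_cost(costs):
    """costs[i][j] = price person i charges for job j."""
    free_people = set(range(len(costs)))
    open_jobs = set(range(len(costs[0])))
    total = 0
    while open_jobs:
        # cheapest remaining (person, job) pair
        i, j = min(((i, j) for i in free_people for j in open_jobs),
                   key=lambda ij: costs[ij[0]][ij[1]])
        total += costs[i][j]
        free_people.remove(i)
        open_jobs.remove(j)
    return total

# Rows: Alice, Bob, Carl; columns: vacuuming, window-washing
print(greedy_cost([[50, 40], [30, 20], [60, 30]]))  # 70, not the optimal 60
```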

Nonetheless, even once you add costs into the picture, there still remains a very elegant greedy algorithm that successfully solves min-cost flow:

  • As long as the residual graph has an augmenting path from s to t, increase the flow on any minimum-cost augmenting path.

So in our example, the second iteration would use the augmenting path [pay $30 for vacuuming by Bob, refund $20 from Bob for not window-washing, pay $30 for window-washing by Carl] which has cost $30-$20+$30 = $40, less than the direct path [pay $50 for vacuuming by Alice].

I was surprised the first time I read that this algorithm is correct! But it has a very nice inductive proof. In the proof, the value of a flow is the amount of flow it sends from the source to the sink (so max-flow is really the problem of finding a flow with maximum value).

For simplicity, assume all capacities are unit. Let F be a minimum-cost value-k flow. Let X be any minimum-cost value-(k+1) flow. View the difference between X and F as a flow in F’s residual graph. By the flow decomposition theorem, this difference can be written as a single augmenting st path P plus a possibly empty list of cycles.

None of the cycles can have negative cost: a negative-cost cycle could be used to augment F into a cheaper value-k flow, contradicting F’s minimality. Therefore, augmenting F by P gives a value-(k+1) flow whose cost is at most X’s cost, and which is therefore a minimum-cost value-(k+1) flow.

Let’s assume unit capacities for now (which is true in the application to min-cost bipartite matching). This algorithm needs O(N) iterations. How can we find the minimum-cost augmenting paths? The residual graph has negative-cost reverse edges, but crucially, the above proof shows that it never has a negative-cost cycle. So the Floyd-Warshall algorithm in O(N³) time, or the Bellman-Ford algorithm in O(NM) time, can be used in each iteration. This gives a total time complexity like O(N²M).
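Putting the pieces together, here is a sketch of successive shortest paths with Bellman-Ford in each iteration, using the node layout from before (s, then people, then jobs, then t); this is the slower pre-Hungarian version, and the dense matrices are my own simplification:

```python
# Min-cost bipartite matching via successive shortest paths + Bellman-Ford.
def min_cost_matching(costs):
    """costs[i][j] = price person i charges for job j; returns min total cost."""
    P, Q = len(costs), len(costs[0])
    n = P + Q + 2
    s, t = 0, n - 1
    cap = [[0] * n for _ in range(n)]
    cost = [[0] * n for _ in range(n)]
    for i in range(P):
        cap[s][1 + i] = 1                          # s -> person, cost 0
        for j in range(Q):
            cap[1 + i][1 + P + j] = 1              # person -> job
            cost[1 + i][1 + P + j] = costs[i][j]
            cost[1 + P + j][1 + i] = -costs[i][j]  # residual reverse edge
    for j in range(Q):
        cap[1 + P + j][t] = 1                      # job -> t, cost 0

    total = 0
    for _ in range(Q):                 # Q augmentations fill every job
        INF = float("inf")
        dist, parent = [INF] * n, [-1] * n
        dist[s] = 0
        for _ in range(n - 1):         # Bellman-Ford tolerates negative edges
            for u in range(n):
                if dist[u] == INF:
                    continue
                for v in range(n):
                    if cap[u][v] > 0 and dist[u] + cost[u][v] < dist[v]:
                        dist[v] = dist[u] + cost[u][v]
                        parent[v] = u
        total += dist[t]
        v = t                          # augment along the cheapest path
        while v != s:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
    return total

print(min_cost_matching([[50, 40], [30, 20], [60, 30]]))  # 60
```

On the toy instance, the second Bellman-Ford pass finds exactly the $40 path through the reverse edge described above.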

We can do better!

Using Your Full Potential

The Hungarian Algorithm, when viewed in the context of flows, lets us improve the above algorithm simply by switching to a better algorithm for shortest paths: namely, Dijkstra’s algorithm. The only problem is those pesky negative edges; Dijkstra’s algorithm won’t work with those!

There is, however, a way to transform some graphs to avoid negative edges. The transformation is to use a potential function, which I’ll denote by y, since it’s like a dual solution for the associated linear program.

  • A potential y is a vector that assigns a real value to every node.
  • A valid potential is one that satisfies, for every edge uv, y(v) ≤ cost(uv) + y(u).
  • The reduced cost of edge uv is defined as reduced_cost(uv) := cost(uv) + y(u) – y(v);
    • so a potential is valid if all reduced costs are nonnegative.

Potentials are a way of transforming a graph in a superficial way:

  • Let the reduced length of a path be its total length under the reduced costs, and the reduced distance from u to v be the shortest reduced length over all u-to-v paths.
  • Then you can prove that reducing shifts the length of every u-to-v path by the same amount, regardless of which path it is:
    • a path’s reduced length equals its normal length, plus y(u), minus y(v), since the intermediate y terms telescope away; in particular, the same relation holds for the distances.

Conversely, if we compute the reduced distances, we can easily recover the real distances. So as long as the algorithm can maintain a valid potential, the reduced costs are nonnegative, and it can use Dijkstra’s algorithm to find the shortest st path.
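A quick numeric check of that telescoping identity, on made-up toy numbers of my own:

```python
# (u, v, cost) edges forming a path 0 -> 1 -> 2 -> 3
path = [(0, 1, 5), (1, 2, -2), (2, 3, 7)]
y = {0: 0, 1: 3, 2: 1, 3: 6}  # an arbitrary potential

real = sum(c for _, _, c in path)
reduced = sum(c + y[u] - y[v] for u, v, c in path)
print(real, reduced)                   # 10 4
# Telescoping: only the endpoints' potentials matter.
assert reduced == real + y[0] - y[3]
```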

How can we maintain a valid potential? It turns out that not only are potentials useful for finding shortest paths, but also shortest paths are useful for computing potentials.

  • In a graph with no negative cycles, define y(v) for every node v to be the shortest-path distance from s to v. Then this y is a valid potential.

This is useful algorithmically because it lets us leapfrog from a valid potential, to shortest paths, to the augmenting path needed to augment the min-cost flow, to the next potential. Expressed this way, the Hungarian algorithm looks like this:

  1. Let y be the all-zero vector.
  2. Using y as a valid potential, find a shortest st path P, stopping if none exists.
  3. Redefine y to be the shortest path distances from s.
  4. Push a unit of flow along P and update the residual graph.
  5. Go back to step 2.

To prove that this actually works, we need to show that when we update the residual graph, y remains a valid potential.

This is because the edges uv on any shortest path from s satisfy d(s, u) + cost(uv) = d(s, v), which is a form of complementary slackness. So adding the reverse edge vu with cost(vu) = -cost(uv) will not violate the validity constraint y(u) ≤ cost(vu) + y(v) for the potential: it holds with equality.
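The five steps above can be sketched as follows; as throughout this post, the dense matrices and node numbering are my own choices, and the code assumes a feasible instance (Q ≤ P, every price finite):

```python
# Hungarian algorithm as min-cost flow: successive shortest paths with
# Dijkstra on reduced costs, maintaining the potential y.
import heapq

def hungarian(costs):
    """costs[i][j] = price person i charges for job j; returns min total cost."""
    P, Q = len(costs), len(costs[0])
    n = P + Q + 2
    s, t = 0, n - 1
    cap = [[0] * n for _ in range(n)]
    cost = [[0] * n for _ in range(n)]
    for i in range(P):
        cap[s][1 + i] = 1
        for j in range(Q):
            cap[1 + i][1 + P + j] = 1
            cost[1 + i][1 + P + j] = costs[i][j]
            cost[1 + P + j][1 + i] = -costs[i][j]  # residual reverse edge
    for j in range(Q):
        cap[1 + P + j][t] = 1

    y = [0] * n                        # step 1: all-zero potential
    total = 0
    for _ in range(Q):
        INF = float("inf")
        dist, parent = [INF] * n, [-1] * n
        dist[s] = 0
        pq = [(0, s)]                  # step 2: Dijkstra on reduced costs
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue               # stale heap entry
            for v in range(n):
                if cap[u][v] > 0:
                    nd = d + cost[u][v] + y[u] - y[v]  # reduced cost >= 0
                    if nd < dist[v]:
                        dist[v], parent[v] = nd, u
                        heapq.heappush(pq, (nd, v))
        # step 3: the new potential is the true distance from s, recovered
        # from the reduced distance (y[s] stays 0 throughout).
        y = [yv + dv if dv < INF else yv for yv, dv in zip(y, dist)]
        total += y[t]                  # new y[t] = real cost of this path
        v = t                          # step 4: push one unit along the path
        while v != s:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
    return total

print(hungarian([[50, 40], [30, 20], [60, 30]]))  # 60
```

Note that this dense sketch runs in O(Q·(P+Q)²); the O(Q²P) bound quoted earlier needs a Dijkstra tailored to the bipartite structure, but the flow of potentials and shortest paths is the same.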

That’s it!


Some explanations of the Hungarian algorithm are slightly different in step 3, rephrasing it as the equivalent “Increase y(v) by the reduced distance from s to v, for each node v.”

Historically, the algorithm is based on ideas of Kőnig and Egerváry, with Kuhn in 1955 being the first to consider a polynomial-time implementation. This algorithm is the oldest one I know of where you get a substantial benefit from thinking in both the primal (the flow) and the dual (the potential) at the same time. Similar ideas were used in 1965 by Edmonds to derive an algorithm for min-cost non-bipartite matching; more on that to come in the following post.

The algorithm above will work eventually for graphs with arbitrary edge capacities; but if the maximum capacity is K, the number of iterations might be something like K or larger. In the next post we’ll get that back down to polynomial-time by giving an algorithm that takes O(M log K) iterations.

I wanted to post a short update on CS Circles, my project with the University of Waterloo CEMC to teach introductory programming online.

I’ve been checking the total number of problems solved on the site, and it has just surpassed 750,000! I’m pretty sure I’ve never done 750,000 of anything in my life, except maybe eating grains of rice. More than 20,000 users have solved the first few problems, while only slightly more than 500 students have solved the hardest ones.

One of the most interesting things about CS Circles is how the “ask for help” feature has worked in practice. This feature is available on every programming problem, and allows students who are stuck to send in the current version of their code alongside a few sentences explaining where they are stuck. They can ask for help from their own teacher if they are registered on the site; otherwise they can ask the “CS Circles Assistant,” which is me and my colleagues Sandy and Troy. The interesting thing is that more than half of the questions I get are already solved before I log in! I take the weekend shift, so the questions I see have been sitting there for about 24 hours. I remember using the Piazza system at university, where we strove to answer every question within hours! I guess we should just be lazier: most people will figure it out sooner or later, once they’ve taken the time to verbalize exactly what the problem is.

The other update is about the source code. CS Circles became open-source back in 2013, thanks to a SIGCSE special projects grant. Over the last couple of years, a team of educators in Serbia extended the project in two ways: first, they created a Serbian translation; second, they extended the underlying infrastructure to add new functionality. This includes look & feel improvements, an involved improvement to run multiple courses at once (in their case, multiple difficulty levels), and the ability to run time-limited and feedbackless quizzes. It was great just to know that the project was useful to them, and we also were able to publish a report on the changes and how the students liked it.

Are you interested in learning either Serbian or Python? Then you can check it out at http://imi.pmf.kg.ac.rs/imipython/

This holiday, I went back to Toronto to hang out with family and friends. It was a great vacation! But I also had to work for a couple of days. I took this occasion to check out the different Google locations in Toronto and Waterloo and work from each for one day.


The Waterloo office is very new; the people working there moved in November from another location across the street. PM Justin Trudeau visited it last week for the “official opening,” to much fanfare! It’s a fairly large office, with several hundred engineers and an assortment of other Googlers. I took a couple of pictures of things that I liked there. The picture above was taken near one of the microkitchens; it reminded me of hipstery cafes, although it was pretty deserted this time of year.

The second picture, below, I can only assume is a time machine. A pretty good way to get the scoop on the competition! The Toronto office is located right downtown. It is quite a bit smaller and consists almost entirely of sales and business staff. It’s the first place I’ve ever visited that had outdoor minigolf on a balcony:


Just for the record, that’s a mini-golf hole consisting of a replica of the Princes’ Gates (of CNE fame), and if you squint hard, you can see a sculpture of a polar bear in the background. Moving inside, I found this in a microkitchen:


The Tim Hortons loses some authenticity points, since it’s only called “Cafe & Bake Shop” in the US as far as I know. However, I have to give them credit for the box of maple sap! Finding that for sale anywhere was a first for me, and it was pretty delicious!


I just finished reading “A Mathematician’s Apology,” written by the number theorist G. H. Hardy in 1940. I sort of assumed I had already read it when I was younger, but this turns out to be completely false.

How do I know that I’ve never read it? Before opening it this time, I thought it was a book focusing on his relationship with the prodigy Ramanujan. (Was Hardy the one who gave him tuberculosis?) Nope, Ramanujan is barely mentioned at all.

Instead, the book is Hardy’s defense of mathematics. In brief, he laments that the kind of advanced mathematics he studied was not useful; and in the book he defends his life choice to spend all his professional time studying it.

Another major theme is that Hardy was past what he considered to be the mathematical prime of his life: he was age 62, although he would go on to live for another 7 years. He famously wrote,

No mathematician should ever allow himself to forget that mathematics, more than any other art or science, is a young man’s game.

This, of course, is questionable, despite the examples Hardy gives in the book. Consider, for instance, László Babai’s apparent discovery this year of a quasi-polynomial-time algorithm for Graph Isomorphism, at the distinguished age of 65.

Beyond simply talking about perspectives on age, the book is very wistful and has a somewhat depressed perspective. I’ll admit people in inter-war Europe had reason to be troubled; the armed forces seemed to be so much more important than peaceful intellectual pursuits. It is too bad, though, that Hardy did not have the benefit of hindsight.

He wrote,

The general conclusion, surely, stands out plainly enough. If useful knowledge is, as we agreed provisionally to say, knowledge which is likely, now or in the comparatively near future, to contribute to the material comfort of mankind, so that mere intellectual satisfaction is irrelevant, then the great bulk of higher mathematics is useless. Modern geometry and algebra, the theory of numbers, the theory of aggregates and functions, relativity, quantum mechanics — no one of them stands the test much better than another, and there is no real mathematician whose life can be justified on this ground. If this be the test, then Abel, Riemann, and Poincare wasted their lives; their contribution to human comfort was negligible, and the world would have been as happy a place without them.

This is pretty far from how things have turned out. Quantum mechanics and relativity have made possible things like transistors and GPS satellites. The theory of numbers and algebra led the way to encryption, which has had far-reaching effects on our daily lives. Hardy actually admitted as much just a few pages earlier:

Time may change all this. No one foresaw the applications of matrices and groups and other purely mathematical theories to modern physics, and it may be that some of the ‘highbrow’ applied mathematics will become ‘useful’ in as unexpected a way; but the evidence so far points to the conclusion that, in one subject as in the other, it is what is commonplace and dull that counts for practical life.

So I don’t know if his conclusion that “everything is useless” stems from a British sense of self-deprecation, or just griping born of his yearning for youth.

I found the contrast even starker here:

Real mathematics has no effects on war. No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems very unlikely that anyone will do so for many years.

In a delicious twist of fate, Hardy’s job at Oxford put him just about 50 kilometers from Bletchley Park, where his compatriots such as Alan Turing and Bill Tutte were using combinatorics to reveal the secrets of the Nazis in World War 2. Admittedly, what they did was even beneath the “theory of numbers;” they used group theory, heuristics, and improvements to brute-force methods along with computer engineering. Even so, Hardy specifically disparaged pursuits of this ilk, calling out newspaper puzzles and chess problems as just having a mathematical “kick.”

All said, I still liked the book. Hardy talks at length about the aesthetics of mathematics and these discussions really struck a chord with me: once you’ve proven a mathematical statement, it stands true forever.

Now, I need to go (re-)learn about Gaussian integers in order to understand Hardy’s comments about the 2-squares theorem!


A couple of years ago I got a subscription to Cook’s Illustrated, a fabulous food magazine. It has a few distinctive features:

  • no ads! this hugely increases my happiness in reading the magazine cover-to-cover
  • the recipes are written like science experiments
  • the interiors are all black-and-white, with some photos replaced by schematics or pencil-sketch-style drawings (see right)

I’ve added quite a few delicious items from the magazine to my regular rotation of dishes, including carne deshebrada and wheat berry pancakes. This month’s issue had something entirely new for me.

The recipe is for a vegetarian burger. I’ve had veggie burgers before, but they always had the look, taste and texture of hockey pucks. To the contrary, these ones are pretty satisfying on all counts. Especially if you like your burger loaded with fixin’s, you can barely notice the difference in switching from beef burgers to these beany bad boys.

The patties

Here’s the outline of the recipe. I imagine you can use whatever herbs and spices are your favourite and it will turn out great no matter what.

  1. Drain two 15-ounce cans of black beans. Let sit on paper towels to dry thoroughly.
  2. Weigh one ounce of tortilla chips into a food processor. Pulverize.
  3. Add the beans to the food processor and process until they have the texture of ground beef (approx 5 pulses).
  4. Beat two eggs. Incorporate 2 tbsp flour; this will be the glue to keep the burgers together. Add herbs (4 green onions and a half-bunch of chopped cilantro leaves) and spices (1 tsp cumin, 1/2 tsp coriander, 1/4 tsp each salt and pepper, 1 tsp sriracha, 2 minced garlic cloves). Stir.
  5. Combine “glue” with bean-chips mixture and stir with a spoon until uniform.
  6. Important! Let the mixture rest, covered, in the fridge. Rest for at least 1 hour and at most 1 day.
  7. Form about 8 patties (or 12 to 16 sliders). Fry in a generous amount of vegetable oil at medium-high heat for 5 minutes per side.

We still have a couple left (pictured above). Just to make sure I don’t forget how to be a carnivore, I’ll be eating one topped with bacon slices for dinner.



Better late than never, I wanted to post some memories from the trip we took to Mexico this summer for my brother’s wedding. Disclaimer: due to time zone changes, cerveza, or other factors, my memories of which thing happened on each day may have shifted.

Day 0: We got rebooked and had a slight delay but arrived in the afternoon to meet my family at the resort. It was a beautiful place. I played gigantic human-sized chess with my sister, who whomped me. At least there were free drinks to drown my sorrows. We went to the French restaurant on-site and had a delicious 3-course dinner.


The newlyweds’ patio

Day 1: Had a great night’s sleep (no dream of being attacked by giant knights and bishops). We rendezvoused with siblings, siblings-in-law, and brother’s other friends at the various beaches and bars. We took taxis to Playa Del Carmen for a special dinner arranged by the newlyweds-to-be. Before returning home we checked out what I presume is one of the world’s largest souvenir stores, but managed to escape without purchasing anything.


Central square in Cancún

Day 2: Ate a delicious breakfast (as usual); there were omelette stations and bottomless mimosas but I think on this day I composed some huevos rancheros. We started checking out the attractions off-site. Sara and I went to a particularly cool cenote named Río Secreto… it reminded me of past spelunking adventures but was much more chilled out. We skipped paying $100+ for the entire set of pictures but snapped a blurry bootleg of one image. We went to Cancún city afterwards; we got a little lost but then discovered some random outdoor concert/festival with lots of locals. It was a great adventure together, all in all. I tried out the churros. Verdict: they were delicious!


Brad, Isaac and me

Day 3: My brother Brad’s wedding. Got my fancy clothes on with him and then we headed out to the bar for his last pretzel as a free man. The other groomsman Isaac was there to party in fine style (pictured: the fanciest bathroom I’ve ever been in). The wedding was on a beautiful beach, with beautiful people, and beautiful food. Brad and Claudia gave very heartfelt speeches and it was my honour to spend that day celebrating with them. There was an extremely delicious cocktail made with special Pisco (a national beverage for the bride). Then as it got dark we saw just how good the resort was at wedding planning 🙂 From what I recall we danced with pool noodles, glowing glasses, balloons, noise makers, and managed to stay until the closing of not one but two bars on the resort.


Brad, Claudia and the Atlantic ocean!

Day 4: Recovery. We played a lot of volleyball in the pool. Delicious mexican restaurant on the resort for dinner!


Chichén Itzá’s one unrestored face

Day 5: Adventure. We rented two cars and drove out to Chichén Itzá. It was a very hot day but we had a good tour guide to keep us motivated. The driving was extremely peaceful as the highways were pretty empty. We learned that there were special acoustic effects built into the pyramids’ architecture, and that a lot of people “lost their heads” over competitive sports back in Maya days. After escaping to some frosty beverages, we headed to Cenote Ik Kil; this cenote was a swimming hole set about 20 metres underground, with some fish patrolling the waters underneath us, and huge diving platforms overhead. Finally we headed to a delicious dinner of local Yucatan cuisine at Las Campanas, checking out Valladolid’s picturesque church before returning home.


Valladolid church

Day 6: I went on a whale-shark expedition with my cousins’ significant others, Matt and Taka. (Sara has a fear of neckless animals, so she decided to pass.) I managed to put on my contacts while on the deck of a boat speeding across the open ocean, which was exhilarating. We chased those whale sharks (they are vegetarian sharks the size of whales) for a good couple of hours, stopping 4 or 5 times to get in the water with them. Each time we dived in we were disoriented, not just from being besnorkeled and finned, but also from the sheer scale of the creatures. They normally carried an entourage of feeder fish in tow: to their sides, above, and below. They were as large as a bus and moved just as fast, but it looked effortless, so it was a strange feeling trying to chase them while swimming. It was an unforgettable experience! Afterwards, they served some ceviche with hot habanero peppers on board, and we did some more low-key snorkeling near a beautiful reef. We had one more dinner back on the resort, and my family kindly indulged us by letting us show them pictures of our trip to Iran, for which we repaid them in Persian sweets.


I think this is our best direct evidence of being with the whale sharks

It was an amazing trip, both relaxing and full of unique celebrations. Congratulations again, Brad and Claudia! And thanks for bringing us all together on that awesome trip.



Next week I am headed to the ITiCSE conference (Innovation and Technology in Computer Science Education), in Vilnius, Lithuania, to present Websheets, an exercise system for programming problems in C++ and Java.

Websheets combines a couple of ideas.

  • It is an online system where students get immediate feedback on their programming solutions. This has become very common in recent years, though not many support C++, and not many are open-source (the biggest exception being the awesome CloudCoder system).
  • An exercise isn’t just a big blank page; rather, it’s a template with specific gaps to be filled in. The motivation for this is a system of hard-copy handouts used at Princeton… the idea being that, if you are just teaching one topic that day, and you have limited class time, it is better to give the boilerplate and off-topic bits of the program to the students ready-made, so they can make better use of their limited in-class time.

Try it out! Go here and read all about it:


If you create any cool exercises to share, or find ways to break it horribly, I’d love to know.

In the last couple of weeks I’ve done a few fun things to it:

  • I combined the Java version from Princeton and the C++ version from USC.
  • Taking inspiration from CloudCoder, you can now create and edit exercises online.
  • Teachers can now keep track of their students, like we did last term at USC. (We hosted some very low-value homeworks on the system; basic stuff to complement the readings and help students stay focused.)

Moving past the technological part, I’d like to eventually do a study to measure the effects of different types of scaffolded exercises in the classroom. Is it better to provide more scaffolding or less? Though one may be faster, do you learn more from doing more work? Or is the code reading of the scaffolding itself an important skill?