It seems like just yesterday Santa got back from his sunny post-Christmas vacation, and now we’re talking about how Santa’s operation can be optimized mathematically. But even though the Holidays last a few months, Santa’s operation is a year-round affair.
In early December, in partnership with Kaggle, a platform for predictive modeling competitions, we asked: is it possible to create a job scheduling algorithm that not only meets Santa’s toy target, but also keeps his elves healthy and happy? Can Santa’s workshop be brought out of the dark ages of magic and holiday spirit and transformed into a highly optimized modern operation?
The defending champions of Kaggle's annual holiday competition swept all three prizes in the Helping Santa's Helpers optimization challenge and claimed $20,000. In their own words, courtesy of the Kaggle blog, Marcin Mucha and Marek Cygan of team Master Exploder walk us through their winning approach.
Santa and FICO® Xpress brought team Master Exploder "the power of the lower bounds" for Christmas.
What was your background prior to entering this challenge?
We are both active researchers in the field of algorithmics. We are particularly interested in ways of dealing with computational hardness: mainly approximation algorithms, and in Marek’s case parametrized complexity. We both work at the University of Warsaw.
We also both have a long history of competing in all kinds of programming contests. Quick Topcoder-style contests, ACM ICPC, marathons, 24-hour challenges - we have done all of these many times.
What made you decide to enter this competition?
Considering our fixation with programming contests, it is not very surprising that we wanted to try our luck with Kaggle’s Christmas Santa challenges. In fact, these contests seem perfect for us. Not only are the problems computationally hard, but the long duration also gives these contests a bit of a research flavor.
The first one we entered was last year’s “Packing Santa’s Sleigh”. We had tons of fun competing, and in the end we managed to grab the top spot. After that experience, we eagerly awaited the next challenge, and it did not disappoint.
What algorithmic approaches have you used?
The data in this challenge was rather large - we had to schedule 10 million toys for 900 elves. This made it really hard to directly optimize the total production time. Instead, based on investigating the specific features of the problem and the data, we designed a high-level structure that our solution would have, and optimized pieces of this structure separately. To solve these subproblems, we used Integer Linear Programs (ILPs) as much as we could. When we were not able to find a reasonable ILP formulation, or when the problem seemed too simple to use one, we resorted to local search/simulated annealing.
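The team's actual subproblems and ILP formulations are not spelled out here, but the local search/simulated annealing side of such an approach can be sketched in a few lines. The following is a minimal, illustrative annealer for a toy version of the problem (assign jobs to elves to minimize makespan); all names, the cooling schedule, and the simplistic objective are assumptions for the sake of illustration, not the team's code:

```python
import math
import random

def anneal_assignment(durations, n_elves, steps=20000, t0=100.0, seed=0):
    """Toy simulated annealing: assign jobs to elves to minimize makespan.

    The real contest objective (speed ratings, sanctioned hours, rest)
    is far richer; this only illustrates the basic annealing loop.
    """
    rng = random.Random(seed)
    assign = [rng.randrange(n_elves) for _ in durations]
    load = [0.0] * n_elves
    for job, elf in enumerate(assign):
        load[elf] += durations[job]

    best = cur = max(load)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9   # linear cooling schedule
        job = rng.randrange(len(durations))
        old_elf, new_elf = assign[job], rng.randrange(n_elves)
        if new_elf == old_elf:
            continue
        # Tentatively move one job to another elf and re-evaluate
        load[old_elf] -= durations[job]
        load[new_elf] += durations[job]
        cand = max(load)
        # Metropolis rule: accept improvements, sometimes accept worse moves
        if cand <= cur or rng.random() < math.exp((cur - cand) / t):
            assign[job] = new_elf
            cur = cand
            best = min(best, cand)
        else:                                   # undo the rejected move
            load[new_elf] -= durations[job]
            load[old_elf] += durations[job]
    return assign, best
```

The same skeleton works for any neighborhood move; only the move generator and the objective change between subproblems.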
What was your most important insight into the data?
Investigating the data was key to designing a good high-level solution structure. However, most of the observations we made were rather straightforward. In contrast to ML contests, there are no magical features here, and no interesting patterns either. The key observation was probably the following:
There was a huge number of large toys and producing those, even at the highest speed rating, would take many years. Since all toys arrive during the first year, this makes arrival times almost irrelevant (except for the first year, which needs to be processed separately). This observation significantly simplifies the problem and gives it a “bin packing flavor”.
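To make the "bin packing flavor" concrete: once arrival times stop mattering, assigning toys to elves looks like packing items into bins of bounded capacity. The textbook first-fit decreasing heuristic below is a standard illustration of that problem class, not the team's method:

```python
def first_fit_decreasing(sizes, capacity):
    """Classic bin-packing heuristic: place each item, largest first,
    into the first bin that still has room; open a new bin if none does."""
    bins = []  # each bin is [remaining_capacity, [items]]
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if b[0] >= size:
                b[0] -= size
                b[1].append(size)
                break
        else:
            bins.append([capacity - size, [size]])
    return [b[1] for b in bins]
```

For example, `first_fit_decreasing([5, 4, 3, 2, 2], 8)` packs the items into two bins, `[5, 3]` and `[4, 2, 2]`.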
Were you surprised by any of your findings?
For a long time we used elves with maximum speed rating to produce the largest toys. Then we saw the trailer for the new Marion Cotillard movie “Two days, one night” and thought: “Wow, this is what our elves should do! Get rid of their fellow elves to get salary bonuses!”. Well, not exactly. This might have been a good idea, but the actual idea that we had was that they should work for two days and one night straight. That is 34 hours, including 20 working hours and 14 non-working hours. As it turns out, this leads to the speed rating only dropping from 4.0 to about 1.3 and is much better than just producing a very large toy.
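The arithmetic behind that "4.0 to about 1.3" claim can be checked directly, assuming the competition's productivity rules as we recall them (rating multiplied by 1.02 for each sanctioned hour worked, by 0.90 for each unsanctioned hour worked, clamped to [0.25, 4.0]) and reading the 20 "working" hours as sanctioned and the 14 "non-working" hours as unsanctioned hours worked through the night:

```python
def rating_after_shift(start, sanctioned_hours, unsanctioned_hours):
    """Elf productivity update: x1.02 per sanctioned hour worked,
    x0.90 per unsanctioned hour worked, clamped to [0.25, 4.0].
    (Factors and clamp recalled from the contest rules, not the source.)"""
    r = start * (1.02 ** sanctioned_hours) * (0.90 ** unsanctioned_hours)
    return max(0.25, min(4.0, r))

# Two days and one night straight: 20 sanctioned + 14 unsanctioned hours
print(round(rating_after_shift(4.0, 20, 14), 2))  # about 1.36
```

Since 1.02**20 ≈ 1.49 and 0.90**14 ≈ 0.23, the rating lands near 4.0 × 0.34 ≈ 1.36, matching the "about 1.3" in the interview.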
Which tools did you use?
We started by analyzing the data using R. Then, for the non-ILP parts of the solution we used the GCC C++ compiler, and for solving the ILPs we used the interactive version of the FICO Xpress-Optimizer. We considered using the C++ solver API, but abandoned that idea - for fast development it is usually better to stick with simplicity. We also used some other tools to analyze the logs generated by our programs and to automate many tasks - mainly standard Unix utilities like grep, sort, uniq, etc., but also occasionally awk or Python.
How did your experience help you succeed in this competition?
We already mentioned in an earlier answer how these Christmas optimization contests are perfect for us because of our research experience. This year’s contest’s problem was particularly suitable for us as it was amenable to ILP approaches. As we do ILP modeling regularly in our research and have a lot of experience with related techniques and tricks, this might have given us an edge over the field.
What have you taken away from this competition?
$20,000. Just kidding, of course we are going to have to pay taxes on it. But more seriously:
Why use ILPs instead of, say, local search algorithms? It is not really about the quality of the solutions, or how fast they are found. The key advantage of using ILPs is that they not only give you a solution, but also a lower bound. This makes it much easier to decide when to stop the solver, and it also gives you hints as to where your solution might be improved. In principle we knew that already, but this contest let us really feel the power of the lower bounds.