MIP Benchmarking: Don't Abuse the Standards

Recently, a performance announcement from a FICO competitor caused a big shakeup in the mathematical optimization community. My colleague Timo Berthold and I wrote a detailed post about MIP benchmarking on the FICO Community blog, but I thought it was worth sharing the gist of it here.

At issue is how developers benchmark the performance of mathematical optimization tools, particularly mixed integer programming (MIP) solvers. The community already has a clear set of standards for MIP benchmarking (see MIPLIB2010), which have evolved over time.

In the case at hand, there were clear issues with the way the competitor generated and discussed their results:

  1. When there is a clearly defined “benchmark set”, picking subsets of its instances to justify general claims about performance is a bad and misleading practice. It is particularly troubling when statements can be read as if they held for the full set rather than only for a subset. Read more about the MIPLIB2017 benchmark set below, after the bullets.
  2. Even if one were to present comparative results on a subset, those results should (a) be put into context with the results on the full set and (b) explicitly name which instances belong to the subset. Neither happened in this particular case.
  3. Every community has its standard measure of performance. In computational MIP, this is the shifted geometric mean(1) of running times. There are also a few “minor standards”, such as node counts and the number of solved instances. However, no one should use a measure that is non-standard in the community for comparison without explaining it in detail. In the case mentioned above, this was not done consistently in the majority of the ongoing communications.
  4. Non-standard measures can be tricky or misleading. In this particular case, the PAR10(2) measure computes a score, not a speedup factor, since it multiplies some of the involved values by penalty terms. Therefore, a PAR10 score cannot and must not be used to make a statement such as “solver A is x times faster than solver B”, as happened here. PAR10 is not a speed factor. In our opinion, this is a good argument against using PAR10 for computational MIP in general, since its results do not represent a quantitative statement.
  5. Doing all of the above and publishing results so far off the official numbers, just days before a new benchmark set and results are published, is bad practice.
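To make the difference between the two measures concrete, here is a minimal sketch of both. The shifted geometric mean follows the common MIPLIB convention (shift of 10 seconds); PAR10 replaces each unsolved instance's time with 10 times the time limit before averaging. The instance data below is entirely hypothetical and chosen only to show how a single timeout inflates a PAR10 ratio far beyond any real slowdown.

```python
import math

def shifted_geometric_mean(times, shift=10.0):
    # exp(mean(log(t_i + s))) - s; the community-standard measure,
    # with s = 10 seconds as commonly used for MIP running times.
    n = len(times)
    return math.exp(sum(math.log(t + shift) for t in times) / n) - shift

def par10(times, solved, time_limit):
    # Arithmetic mean of running times, where each unsolved
    # instance is penalized as 10 * time_limit.
    penalized = [t if ok else 10.0 * time_limit
                 for t, ok in zip(times, solved)]
    return sum(penalized) / len(penalized)

# Hypothetical data: 4 instances, 3600 s time limit.
# Solver A solves all four; solver B times out on one.
limit = 3600.0
a_times = [10.0, 100.0, 1000.0, 2000.0]
a_solved = [True, True, True, True]
b_times = [20.0, 150.0, 1200.0, limit]
b_solved = [True, True, True, False]

sgm_ratio = shifted_geometric_mean(b_times) / shifted_geometric_mean(a_times)
par10_ratio = par10(b_times, b_solved, limit) / par10(a_times, a_solved, limit)

print(f"SGM ratio (B/A):   {sgm_ratio:.2f}")    # modest slowdown
print(f"PAR10 ratio (B/A): {par10_ratio:.2f}")  # inflated by the 10x penalty
```

On this toy data the shifted-geometric-mean ratio stays close to the actual per-instance slowdown, while the PAR10 ratio is dominated by the single penalized timeout. This is exactly why a ratio of PAR10 scores cannot be read as "x times faster."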

What Is FICO's Take on MIP Benchmarking?

When we present comparisons of FICO Xpress Optimization against competitors on MIPLIB or other sets from Hans Mittelmann's benchmark site, FICO always uses the numbers presented there and will continue to do so.

FICO strikes a careful balance between putting effort into benchmarking and delivering value to customers. We believe in the strength of the mixed-integer programming community to define its standards, and we see ourselves as active members of this community. We feel honored that FICO representatives were part of the MIPLIB2010 and MIPLIB2017 committees, and we stand by the results of those international research and norm-defining projects.

It is an exciting time for mathematical optimization, and we hope that the great community spirit can be kept up.

For a fuller description, see our original post.
