Last fifty years of integer linear programming: Recent practical advances (2024)
inria.hal.science | 204 points by teleforce a day ago
Could someone maybe give a high-level explanation of why commercial ILP solvers (e.g. Gurobi) are that much better than free/open-source ones? Is it that ILP is inherently so difficult to solve (I know it's NP-hard) that the best solvers are just a large ensemble of heuristics for very specific sub-problems, and thus no general "good" strategy has made its way into the public domain?
It’s mostly that they work closely with clients in a very hands-on way to implement problem-specific speedups. And they’ve been doing this for 10-20 years. In the MILP world this means good heuristics (to find good starting points for branch & bound, and to effectively prune the B&B tree), as well as custom cuts (to cut off fractional solutions in a way that effectively improves the objective and solution integrality).
It’s common that researchers in Operations Research, when they pick a specific problem, can beat Gurobi and other solvers pretty easily by writing their own cuts & heuristics. The solver companies just do this consistently (by hiring teams of PhDs and researchers) and have a battery of client problems to track improvements and watch for regressions.
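For anyone who wants a concrete feel for what these solvers chew on, here's a minimal MILP using SciPy's HiGHS-based interface (scipy >= 1.9). Toy numbers of my own; the commercial edge described above only shows up on instances orders of magnitude larger:

```python
# Tiny MILP: maximize 3x + 4y  s.t.  x + 2y <= 4,  3x + y <= 6,  x, y >= 0 integer.
# HiGHS runs presolve, cutting planes, and branch & bound under the hood.
import numpy as np
from scipy.optimize import milp, LinearConstraint

c = np.array([-3.0, -4.0])                 # milp minimizes, so negate to maximize
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
cons = LinearConstraint(A, -np.inf, [4.0, 6.0])
res = milp(c, constraints=cons, integrality=np.ones(2))  # both variables integer
print(res.x, -res.fun)                     # -> [0. 2.] 8.0
```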
> the best solvers are just a large ensemble of heuristics for very specific sub-problems
The big commercial solvers have the resources (and the clients interested in helping) to have invested a lot of time in tuning everything in their solvers to real-world problems. Heuristics are part of that; recognizing simpler sub-problems or approximations that can be fed back into the full problem is another part.
I think a big part is that the OSS solvers are somewhat hamstrung by the combination of several issues: (1) the barrier to entry in SoTA optimizer development is very high, meaning that there are very few researchers/developers capable of usefully contributing both the mathematical and the programming work needed in the first place, (2) if you are capable of (1), the career paths that make lots of money lead you away from OSS contribution, and (3) the nature of OSS projects means that "customers" are unlikely to contribute back the kind of examples, performance data, and/or profiling that is really needed to improve the solvers.
There are some exceptions to (2), although being outside of traditional commercial solver development doesn't guarantee being OSS (e.g. SNOPT, developed at Stanford, is still commercially licensed). A lot of academic solver work happens in the context of particular applications (e.g. Clarabel) and so tends to be more narrowly focused on particular problem classes. A lot of other fields have gotten past this bottleneck by having a large tech company acquire an existing commercial project (e.g. Mujoco) or fund an OSS project as a means of undercutting competitors. There are narrow examples of this for solvers (e.g. Ceres) but I suspect the investment to develop an entire general-purpose solver stack from scratch has been considered prohibitive.
Commercial solvers have a huge bag of tricks & good pattern detection mechanisms to detect which tricks will likely help the problem at hand.
If you know your problem structure then you can exploit it and it is possible to surpass commercial solver performance. But for a random problem, we stand 0 chance.
> solvers are just a large ensemble of heuristics for very specific sub-problems
Isn't that statement trivially applicable to anything NP-Hard (which ILP is, since it's equivalent to SAT)?
No, good algorithms for NP hard problems can be more than just heuristics.
Modern SAT solvers are a good example of this. CDCL is elegant.
A SAT solver without any preprocessing won't be competitive with a SoTA SAT solver.
CDCL is core to the problem, but it is not sufficient. You even have SAT solvers like CryptoMiniSAT that try to detect clauses that encode xor gates so they can use Gaussian Elimination.
This is also true of ILP solvers. Simplex + Branch & Cut is elegant. But that's not how you get to the top.
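To make the XOR point above concrete: a set of XOR clauses is just a linear system over GF(2), so once a solver recognizes them it can solve that part by Gaussian elimination in polynomial time instead of clause-by-clause search. A toy illustration of the elimination step (my own sketch, not CryptoMiniSAT internals):

```python
# Solve a system of XOR constraints by Gaussian elimination over GF(2).
# Each constraint (coeffs, rhs) encodes: XOR of the variables with coeff 1 == rhs.

def gf2_solve(system):
    """system: list of ([0/1 coeffs over vars 0..n-1], rhs). Returns a solution or None."""
    rows = [coeffs[:] + [rhs] for coeffs, rhs in system]   # augmented matrix
    n = len(rows[0]) - 1
    pivots, r = [], 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue                                       # free variable
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):                         # eliminate above and below
            if i != r and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    if any(row[-1] for row in rows[r:]):                   # 0 = 1: inconsistent
        return None
    x = [0] * n                                            # free variables set to 0
    for i, col in enumerate(pivots):
        x[col] = rows[i][-1]
    return x

# x0 ^ x1 = 1,  x1 ^ x2 = 1,  x0 ^ x2 = 0
print(gf2_solve([([1, 1, 0], 1), ([0, 1, 1], 1), ([1, 0, 1], 0)]))  # -> [0, 1, 0]
```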
NP-hard is really hard, but it is hard (a) for polynomial running time, (b) for exact solutions, and (c) on worst-case problems.
One might suspect that being fast enough on specific problems, for approximate solutions that still make/save a lot of money, would also be welcome. Ah, perhaps not!
E.g., in NYC, two guys had a marketing resource allocation problem, tried simulated annealing, and ran for days before giving up.
They sent me the problem statement via email, and in one week I had the software written; in the next week I used the IBM OSL (Optimization Subroutine Library) and some Lagrangian relaxation. In 500 primal-dual iterations, on a problem with 600,000 variables and 40,000 constraints, it found a feasible solution within 0.025% of optimality.
So, I'd solved their problem (for practical purposes, the 0.025% has to count as solving it) for free.
They were so embarrassed they wanted nothing to do with me. We never got to where I set a price for my work.
The problem those two guys had was likely that, if they worked with me, I would come to understand their customers and then beat them and take those customers. There in NYC, that happened a second time.
If a guy is in, say, the auto business, and needs a lawyer, the guy might want the best lawyer but will not fear that the lawyer will enter the auto business as a powerful competitor. Similarly for a good medical doctor.
For an optimization guy saving, say, 5% of the operating costs of a big business, say, $billion in revenue a year, the whole management suite will be afraid of him getting too much power and will work to get him out -- Goal Subordination 101 or just fighting to retain position in the tribe.
After having some grand successes in applied math, where other people had the problem but were then afraid that I would become too powerful, I formulated:
If some technical, computing, math, etc. idea you have is so valuable, then start your own business exploiting that idea -- of course, need a suitable business for the idea to be powerful.
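For readers who haven't seen Lagrangian relaxation before: the idea is to move hard constraints into the objective with multipliers, solve the now-easy inner problem, and adjust the multipliers by subgradient steps. A minimal sketch of that primal-dual loop on a made-up toy 0/1 problem (not the OSL-based code from the story above):

```python
# Lagrangian relaxation of  min c.x  s.t.  A x <= b,  x in {0,1}^n.
# Relaxing A x <= b gives L(lam) = min_x (c + A^T lam).x - lam.b, which
# separates per coordinate; L(lam) lower-bounds the true optimum for lam >= 0.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5
c = rng.normal(size=n)
A = rng.uniform(0.0, 1.0, size=(m, n))
b = np.full(m, 4.0)

lam = np.zeros(m)
best_bound = -np.inf
for k in range(1, 501):                          # primal-dual iterations
    red_cost = c + A.T @ lam                     # reduced costs under current lam
    x = (red_cost < 0).astype(float)             # inner problem solved by inspection
    best_bound = max(best_bound, red_cost @ x - lam @ b)
    g = A @ x - b                                # subgradient of L at lam
    lam = np.maximum(0.0, lam + g / k)           # diminishing step, project to lam >= 0

print("best Lagrangian lower bound:", best_bound)
```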
> If a guy is in, say, the auto business, and needs a lawyer, the guy might want the best lawyer but will not fear that the lawyer will enter the auto business as a powerful competitor. Similarly for a good medical doctor.
> For an optimization guy saving, say, 5% of the operating costs of a big business, say, $billion in revenue a year, the whole management suite will be afraid of him getting too much power and will work to get him out -- Goal Subordination 101 or just fighting to retain position in the tribe.
The optimization guy will also not have the infrastructure to compete with the big business. Also, the optimization guy will likely not fight for a management position (not every great applied mathematician is a great manager; in my opinion, in particular because leading employees and office politics are very different skills).
So, there is no competition: simply pay the optimization guy a great salary and somewhat isolate him from the gory office politics - problem solved, everybody will live in peace.
But this is not what happens in your example; so the only reason that I can imagine is the usual, irrational bullying of nerds that many nerds know from the schoolyard.
I would argue that not spending money on it, and showing upper mgmt that the folks they already hired can actually get the job done, often contributes to an external contractor not getting hired.
You might want to ask: why didn't you request money from the guys before starting to work for them? True, but I guess they were the kind who say: show me results first, and then maybe we move on. On the other hand, a 30-40k pilot project in this area is not difficult to negotiate if you're patient.
It takes so much more to run a business than lowering costs with clever math; that step often comes at a later stage, when larger companies look for ways to stay competitive. That's when they start to take a look at their accounts and figure out that certain costs really stack up. Then you come in. The only real power you would have gotten over that company would have been those guys getting fired and replaced by a vendor - ideally that's you!
Scale and speed. For example, most quant trading firms run huge optimizations as often as possible. Open-source solvers often cannot even solve the problems (OOM errors, etc.).
In most MILP domains, the underlying engineering know-how is more critical than mathematical formulations or CS coding (that's why most OR groups operate independently of math or CS departments).
OSS never took off among professional engineers because they have "skin in the game", unlike math and CS folks who just reboot and pretend nothing is wrong.
I vaguely recall building a resource allocation tool using IBM's "ILOG" mixed integer linear programming library, and we realised that if we'd built the tool about 20 years earlier, it would still have been running on the same problems we were now solving within 5 minutes.
As I recall it the raw computer power had increased by a factor of around a thousand and the algorithms had improved by about the same, giving us a factor of a million improvement.
Worth pondering when trying to predict the future!
The "resources" in question were diamonds by the way...
Can anyone share how this is used in practice? Somehow I imagine implementing numerical optimization often fails due to the usual problems with data-driven approaches (trust, bad data, etc.) and ultimately someone important just decides how things are going to be done based on stomach feel.
We use solvers throughout the stack at work: solvers to schedule home batteries and EVs in people's homes optimally, solvers to schedule hundreds of thousands of those homes optimally as portfolios, solvers to trade that portfolio optimally.
The EU electricity spot price is set each day in a single giant solver run, look up Euphemia for some write ups of how that works.
Most any field where there is a clear goal to optimise and real money on the line will be riddled with solvers.
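As a toy version of the battery use case (my own simplification; the production models add efficiency losses, degradation, uncertainty, and network constraints), scheduling one battery against known hourly prices is just a small LP:

```python
# Battery arbitrage LP: choose net charge c_t per hour (positive = buying energy)
# to minimize cost, subject to a power limit and state-of-charge limits.
import numpy as np
from scipy.optimize import linprog

price = np.array([20, 15, 10, 12, 30, 45, 50, 35], dtype=float)  # made-up EUR/MWh
T = len(price)
p_max, e_max = 1.0, 2.0            # MW power limit, MWh capacity, start empty

L = np.tril(np.ones((T, T)))       # cumulative sums: SoC_t = (L @ c)_t
A_ub = np.vstack([L, -L])          # encodes 0 <= SoC_t <= e_max
b_ub = np.concatenate([np.full(T, e_max), np.zeros(T)])

res = linprog(price, A_ub=A_ub, b_ub=b_ub,
              bounds=[(-p_max, p_max)] * T, method="highs")
print("schedule (MW):", res.x.round(2), "profit:", round(-res.fun, 2))
```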
FMCG company here. We use these in practice for:
1. Salesman & delivery travel plan
2. Machine, Human and material resource scheduling for production
3. Inventory level for warehouse distribution center. This one isn't fully automatic because demand forecasting is hard
have a read through the case studies:
gurobi case studies: https://www.gurobi.com/case_studies/
some cplex case studies: https://www.ibm.com/products/ilog-cplex-optimization-studio/...
hexaly (formerly localsolver) case studies: https://www.hexaly.com/customers
"... between 1988 and 2004, hardware got 1600 times faster, and LP solvers got 3300 times faster, allowing for a cumulative speed-up factor higher than 5 × 106, and that was already 20 years ago!"
"The authors observed a speedup of 1000 between [the commercial MILP solvers of] 2001 and 2020 (50 due to algorithms, 20 due to faster computers)."
I wonder if we can collect these speedup factors across computing subfields, decomposed by the contribution of algorithmic improvements, and faster computers.
In compilers, there's "Proebsting's Law": compiler advances double computing power every 18 years.
I've heard Gurobi is fairly expensive. Anyone willing to share pricing details?
I can't share pricing details since they are confidential, but if you just want to play with MIP you don't need to buy one of the big three (XPRESS, Gurobi, CPLEX), which are all very expensive but usually available for free for students. There are at least two good open source / free for non-commercial use MIP solvers available.
How do those stack up against lp_solve (https://lpsolve.sourceforge.net/5.5/index.htm)?
Waaaaay faster.
I've used both. They are waaaaaaaaaay faster, waaaay more reliable, and actually have support. You're not going to want to run your product that is responsible for millions off of something without really solid support.
You can get a temporary free license for Gurobi. You are limited to a 1000 node problem size, but you can learn how to use the tool and set up your problem.
If you have a problem that needs Gurobi, it’s worth paying for it. Talk with their sales team. They are happy to help you get started. They know once you know how to use it, and how it can solve problems you will be inclined to use it in the future.
> If you have a problem that needs Gurobi, it’s worth paying for it.
This statement is based on the assumption that it is a "big money" problem. On the other hand, I know lots of problems interesting to nerds for which Gurobi would help (but nerds don't have the money).
If you have a "nerdy" problem you can probably get someone to write it up as a research paper and then it would easily fall under the academic license. To some extent, if you're buying a commercial license you're just paying for secrecy.
What I've heard - and obviously I can't confirm this - is that their only pricing tier is "call us" - at which point they try to figure out how much money you're making and ask for a slice of it.
It’s much cheaper than making suboptimal decisions slowly. Free solvers are fine for small problems (GLPK, for example), but lots of business problems are pretty much impossible to solve in the timeframe required unless you fork over cash for a premium solver (Gurobi being the best).
The last time I checked about a decade ago, a full license with multiple users and on a server was around 100k USD. I don't recall exact number of seats or server count restrictions.
I want to add that, for many in the industry, it is well worth the price.
The best MIP solvers (CPLEX, GUROBI, FICO) are all extremely expensive unless you're an academic. The free ones are fine for smaller problems. Some like Mosek are quite affordable and a good middle ground. To most organizations, the cost is reasonable for what they're getting.
I wish somebody in this thread would quantify "extremely expensive". So many messages and we still have no idea how much they cost.
Estimate something in the ballpark of 10,000-15,000 USD per seat for a core-limited license (see also https://news.ycombinator.com/item?id=44277010), and considerably more for a network license.
Data point: https://www.solver.com/gurobi-solver-engine-lpmip-software A plugin for some software that contains a Gurobi license specific to this product costs 10,500 USD.
I don't know why people think it's such a deeply shrouded secret - it's ~10k a seat for a core-limited license.
Heh, given all of the whispering, I was imagining something 10x the price. I am a nobody and have at least one license to a different product that is some $13k annual.
It's not cheap but actually quite reasonable and the quality is very good vs free solvers. If you are building a product that needs MILP it's worth it.
> If you are building a product that needs MILP it's worth it.
Rather: if you are building a product that will for sure make a lot of money, and needs MILP, it's worth it.
A lot of product concepts that nerds create are very innovative, but they often remain private side projects.
I remember implementing some version of Gomory cutting hyperplanes back in the 1990s in Maple (for learning, not for production.) Looks like the field has progressed...
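For anyone curious what those cuts look like: given a simplex tableau row x_B + sum_j a_j x_j = b with fractional b (pure-integer problem), the Gomory fractional cut sum_j frac(a_j) x_j >= frac(b) is valid for all integer points yet cuts off the current fractional vertex. A minimal sketch (mine, not the Maple version):

```python
# Derive a Gomory fractional cut from one simplex tableau row.
import math

def gomory_cut(a, b, eps=1e-9):
    """Row: x_B + sum a[j]*x_j = b. Returns (coeffs, rhs) of sum coeffs[j]*x_j >= rhs,
    or None when b is (numerically) integral so no cut is available."""
    f0 = b - math.floor(b)
    if f0 < eps or f0 > 1 - eps:
        return None
    return [aj - math.floor(aj) for aj in a], f0

# Example row: x_B + 0.5*x_1 - 1.25*x_2 = 3.75
print(gomory_cut([0.5, -1.25], 3.75))   # -> ([0.5, 0.75], 0.75)
```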
> if we needed two months of running time to solve an LP in the early 1990s, we would need less than one second today. Recently, Bixby compared the machine-independent performance of two MILP solvers, CPLEX and Gurobi, between 1990 and 2020 and reported speed-ups of almost 4×10^6.
It feels like there’s a significant lack of "ML/AI"-based approaches applied to these kinds of problems. I’ve seen a lot of examples of RL/GNN papers that do attempt to solve smaller problems, but it always feels like the best option is to just pay for a Gurobi license and have at it. I’ve been doing some scheduling optimisation recently (close to job shop scheduling), and while there are some examples of using RL, they just don’t seem to cut it. I’ve resorted to evolutionary algorithms to get reasonable solutions to some big problems (sketch below). Maybe it’s just always more efficient to use OR-type approaches when you can formulate the problem well.
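For concreteness, a minimal version of the evolutionary approach on a toy scheduling problem (my own made-up instance: order n jobs on one machine to minimize total weighted completion time, where the optimum is known via Smith's WSPT rule, so we can see how close the EA gets):

```python
# Tiny (mu + lambda) evolutionary algorithm over job permutations.
import random

random.seed(1)
n = 30
dur = [random.randint(1, 10) for _ in range(n)]
w = [random.randint(1, 5) for _ in range(n)]

def cost(perm):                       # total weighted completion time
    t = total = 0
    for j in perm:
        t += dur[j]
        total += w[j] * t
    return total

def mutate(perm):                     # swap two random positions
    p = perm[:]
    i, j = random.sample(range(n), 2)
    p[i], p[j] = p[j], p[i]
    return p

pop = [random.sample(range(n), n) for _ in range(20)]
for _ in range(500):
    pop += [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop, key=cost)[:20]  # truncation selection

print("EA best:", cost(pop[0]))
print("WSPT optimum:", cost(sorted(range(n), key=lambda j: dur[j] / w[j])))
```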
It depends on the problem. The security-constrained unit commitment problem (how you figure out which power plants to turn on, and when) is an unbelievably complex problem for which MILP solvers like Gurobi can find globally optimal solutions (within the bounds of the MIP gap) quickly. Sure, you could create a genetic algorithm, but there is no guarantee it will give you an answer that isn't stuck in a local minimum. That is assuming you can make it run fast. Neural networks are also going to be suboptimal.
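A toy single-period unit commitment as a MILP, to show the structure (made-up data; the real security-constrained problem adds time coupling, ramp rates, reserves, and network constraints):

```python
# Which of 3 plants to switch on to meet demand at minimum cost.
# Variables z = [u1 u2 u3 p1 p2 p3]: u binary on/off, p continuous output (MW).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

demand = 120.0
pmin = np.array([20.0, 30.0, 10.0])
pmax = np.array([80.0, 100.0, 40.0])
fixed = np.array([100.0, 80.0, 120.0])   # no-load/startup cost if on
var = np.array([2.0, 2.5, 1.5])          # cost per MW produced

c = np.concatenate([fixed, var])
rows, lbs, ubs = [], [], []
rows.append([0, 0, 0, 1, 1, 1]); lbs.append(demand); ubs.append(demand)  # meet demand
for i in range(3):
    r = [0.0] * 6; r[i] = -pmax[i]; r[3 + i] = 1.0     # p_i <= pmax_i * u_i
    rows.append(r); lbs.append(-np.inf); ubs.append(0.0)
    r = [0.0] * 6; r[i] = -pmin[i]; r[3 + i] = 1.0     # p_i >= pmin_i * u_i
    rows.append(r); lbs.append(0.0); ubs.append(np.inf)

res = milp(c, constraints=LinearConstraint(np.array(rows), lbs, ubs),
           integrality=np.array([1, 1, 1, 0, 0, 0]),
           bounds=Bounds(np.zeros(6), np.concatenate([np.ones(3), pmax])))
print("on/off:", res.x[:3].round(), "output:", res.x[3:].round(1), "cost:", res.fun)
```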
SAT is a standard GOFAI problem and you can of course use any programming language in the ML family to write a SAT solver. Thus I'd say that "ML/AI" approaches are, if anything, quite applicable!
title could use [pdf] [2024]
the link does not point to a pdf, it points to an abstract
Unless the OP meant to post specifically the abstract, which I very much doubt, the content submitted is the PDF linked. That said, if that's how the [pdf] tag is meant to be used on this forum, I could understand. Would just also leave me moderately annoyed & wondering why the tag isn't automated then, since that'd be automatable.
[pdf] implies that the link directly downloads a pdf.
Then I am hereby indeed officially moderately annoyed & wondering.
People can read the abstract and then decide if they want to go deeper and also download and read the PDF. I’m sure that many only read the abstract.
Furthermore, depending on publishing site, a paper may also be available as HTML rendered from the LaTeX source, in addition to PDF. (If the page does not now, it may in the future.)
The purpose of a [PDF] tag is to warn about possible unsuitability of the linked resource for mobile consumption (which isn’t the case for the article page linked here), possible download size (though maybe not anymore, nowadays), and possible brightness shock when using dark mode.
> Unless the OP meant to post specifically the abstract, which I very much doubt, ...
On the few occasions I post submissions like this I specifically and deliberately, where possible, post links to abstracts, not the actual papers. People can then skim the abstract and decide whether or not to go further.
You can just add the reference to the paper: https://inria.hal.science/hal-04776866v1/document
Integer linear programming doesn't sound very complex.
Graph vertex 3-colouring (G3C) is NP and NP-Hard, therefore NP-Complete (NPC).
There is a result that says that if you can solve general ILP problems then you can solve general G3C.
Satisfiability is NP and NP-Hard, therefore NP-Complete (NPC). It is therefore equivalent (under some definition of "equivalent") to G3C.
There is a known result that if you can solve arbitrary G3C problems then you can factor integers. While the problem of factoring integers (FAC) is not known to be NPC, clearly factoring integers is very important in today's computing environment.
So if you can solve arbitrary ILP problems you can break several current encryption algorithms that are thought to be secure.
So we can deduce that ILP is a fairly tricky problem to solve.
The thing that fools a lot of people is that random instances of NPC problems tend to be easy. The core of difficult instances gets smaller (in relative terms) as the problems get bigger, so if, say, you pick a random graph, it's probably trivial to find a vertex 3-colouring, or show that none such exists.
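To make the reduction direction concrete, here's the standard 0/1 encoding of G3C as an ILP, fed to a MILP solver as a pure feasibility problem (my own small sketch):

```python
# x[v][c] = 1 iff vertex v gets colour c; every vertex gets exactly one colour,
# and adjacent vertices may not share one. Zero objective: feasibility only.
import numpy as np
from scipy.optimize import milp, LinearConstraint

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]     # small made-up graph
V, C = 4, 3
def idx(v, c): return C * v + c              # flatten x[v][c]

rows, lbs, ubs = [], [], []
for v in range(V):                           # sum_c x[v][c] == 1
    r = [0.0] * (V * C)
    for c in range(C):
        r[idx(v, c)] = 1.0
    rows.append(r); lbs.append(1.0); ubs.append(1.0)
for (u, v) in edges:                         # x[u][c] + x[v][c] <= 1
    for c in range(C):
        r = [0.0] * (V * C)
        r[idx(u, c)] = r[idx(v, c)] = 1.0
        rows.append(r); lbs.append(-np.inf); ubs.append(1.0)

res = milp(np.zeros(V * C), constraints=LinearConstraint(np.array(rows), lbs, ubs),
           integrality=np.ones(V * C))
print([int(np.argmax(res.x[C * v: C * v + C])) for v in range(V)])  # colour per vertex
```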
You can encode travelling salesman as an ILP problem, so it’s a pretty tricky problem.
This is actually a pretty poor example, because we can solve huge TSP instances to optimality in practice (see the Concorde solver). There are many trickier combinatorial problems, such as packing or covering problems, that are typically much harder to solve to optimality.
It's harder than linear programming though.
It's substantially harder than linear programming: it's equivalent to SAT, whereas linear programming is polynomial-time (weakly polynomial with all known algorithms; whether a strongly polynomial algorithm exists is still open).
I normally use the Simplex method, which is fast in practice though not polynomial in the worst case.
You can always just run a portfolio of Simplex/Barrier/PDLP and just grab whichever returns something first. The latter two are usually slower but polynomial time, so you always win.
Can't do that with SAT or ILP.
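A sketch of that portfolio trick, racing SciPy's two HiGHS backends (dual simplex vs. interior point) as stand-ins for a Simplex/Barrier portfolio; my own toy setup, not how the commercial concurrent modes are implemented:

```python
# Race two LP algorithms on the same problem and keep whichever finishes first.
import numpy as np
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait
from scipy.optimize import linprog

def solve(method, c, A_ub, b_ub):
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method=method)
    return method, res.fun

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 200, 400
    c = rng.normal(size=n)
    A_ub = rng.uniform(size=(m, n))
    b_ub = A_ub.sum(axis=1)                   # keeps x = 1 feasible (and the LP bounded)

    with ProcessPoolExecutor() as ex:
        futs = [ex.submit(solve, meth, c, A_ub, b_ub)
                for meth in ("highs-ds", "highs-ipm")]
        done, _ = wait(futs, return_when=FIRST_COMPLETED)
        method, obj = next(iter(done)).result()
        print("first finisher:", method, "objective:", obj)
```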
You have to find the integers that best fulfill a certain condition. That's fundamentally different from real numbers. It looks exactly like other numerical problems, but there's no general efficient algorithm for it, only (very good) techniques tuned to specific classes.
Even continuous linear programming wasn't known to be in P until Khachiyan's ellipsoid method in 1979.