Solving Problems Using Multiple Threads
It is possible to use multiple processors when solving any type of problem with the Optimizer. On the more common processor types, such as those from Intel or AMD, the Optimizer will detect how many logical processors are available in the system and attempt to solve the problem in parallel using as many threads as possible. The number detected can be read through the CORESDETECTED integer attribute. It is also possible to adjust the number of threads to use by setting the integer parameter THREADS.
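For example, the detected core count can be read and the thread limit adjusted as follows. This is a minimal C sketch against the xprs.h header; error handling is elided and the choice of half the detected cores is purely illustrative:

```c
#include <stdio.h>
#include "xprs.h"

int main(void)
{
    XPRSprob prob;
    int cores;

    XPRSinit(NULL);
    XPRScreateprob(&prob);

    /* How many logical processors did the Optimizer detect? */
    XPRSgetintattrib(prob, XPRS_CORESDETECTED, &cores);
    printf("Cores detected: %d\n", cores);

    /* Illustrative choice: cap the solve at half the detected cores. */
    XPRSsetintcontrol(prob, XPRS_THREADS, cores > 1 ? cores / 2 : 1);

    XPRSdestroyprob(prob);
    XPRSfree();
    return 0;
}
```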
By default a problem will be solved deterministically, in the sense that the same solution path will be followed each time the problem is solved when given the same number of threads. For an LP this means that the number of iterations and the optimal, feasible solution returned will always be the same.
When solving a MIP deterministically, each node of the branch-and-bound tree will always be solved in the same way. Each node of the branch-and-bound tree can be identified by a unique number, available through the attribute CURRENTNODE. The tree will always have the same parent/child relationships in terms of these identifiers. A deterministic MIP solve will always find integer solutions on the same nodes, and the attributes and solutions on a node will be the same from one run to another. Since nodes are solved in parallel, the order in which nodes are solved can vary. There is an overhead in synchronizing the threads to make parallel runs deterministic, and it can be faster to run in non-deterministic mode. This can be done by setting the DETERMINISTIC control to 0.
For an LP problem (or the initial continuous relaxation of a MIP), there are several choices of parallelism. Both the barrier algorithm and the dual simplex algorithm support multiple threads. The number of threads to use can be set with BARTHREADS or DUALTHREADS, respectively. It is also possible to run some or all of primal simplex, dual simplex and the barrier algorithm side by side in separate threads, known as a concurrent LP solve. This can be useful when none of the methods is the obvious choice. In this mode, the Optimizer will stop as soon as the first algorithm solves the problem. The number of threads for the concurrent LP solve can be set using CONCURRENTTHREADS. The algorithms to use for the concurrent solve can be specified by concatenating the required "d", "p", "n" and "b" flags when calling XPRSlpoptimize (LPOPTIMIZE) or XPRSmipoptimize (MIPOPTIMIZE); please refer to the section The concurrent solver for more details.
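A concurrent LP solve might be requested as in the following hedged sketch, in which "model" is a placeholder problem name and error handling is elided:

```c
#include "xprs.h"

int main(void)
{
    XPRSprob prob;

    XPRSinit(NULL);
    XPRScreateprob(&prob);
    XPRSreadprob(prob, "model", "");   /* placeholder problem name */

    /* Run dual simplex ("d") and barrier ("b") side by side;
       the Optimizer stops as soon as one of them solves the problem. */
    XPRSsetintcontrol(prob, XPRS_CONCURRENTTHREADS, 2);
    XPRSlpoptimize(prob, "db");

    XPRSdestroyprob(prob);
    XPRSfree();
    return 0;
}
```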
When solving a MIP problem, the Optimizer will try to run the branch and bound tree search in parallel. Use the MIPTHREADS control to set the number of threads specifically for the tree search.
The operation of the Optimizer for MIPs is fairly similar in serial and parallel mode. The MIP callbacks can still be used in parallel, and callbacks are called when each MIP worker problem is created and destroyed. The mipthread callback (declared with XPRSaddcbmipthread) is called whenever a MIP worker problem is created, and the callback declared with XPRSaddcbdestroymt is called whenever a worker problem is destroyed. Each worker problem has a unique ID, which can be obtained from the MIPTHREADID integer attribute. When an executing thread solves a branch-and-bound node, it does so using a worker problem assigned to it. Note that a given worker problem can be assigned to different threads during its lifetime, and the assignment of threads might differ from one run to another.
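As a sketch of how the worker-problem ID can be observed, the following registers a mipthread callback and reads MIPTHREADID from each worker problem as it is created. "model" is a placeholder problem name, and the trailing priority argument of XPRSaddcbmipthread is assumed to be available in the API version used:

```c
#include <stdio.h>
#include "xprs.h"

/* Called once for each MIP worker problem the Optimizer creates. */
static void XPRS_CC worker_created(XPRSprob cbprob, void *data,
                                   XPRSprob threadprob)
{
    int id;
    (void)cbprob; (void)data;
    /* Each worker problem carries a unique identifier. */
    XPRSgetintattrib(threadprob, XPRS_MIPTHREADID, &id);
    printf("Worker problem %d created\n", id);
}

int main(void)
{
    XPRSprob prob;

    XPRSinit(NULL);
    XPRScreateprob(&prob);
    XPRSreadprob(prob, "model", "");   /* placeholder problem name */

    XPRSaddcbmipthread(prob, worker_created, NULL, 0);
    XPRSmipoptimize(prob, "");

    XPRSdestroyprob(prob);
    XPRSfree();
    return 0;
}
```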
When the MIP callbacks are called, they are mutex protected to allow non-threadsafe user callbacks. If a significant amount of time is spent in the callbacks, it can be worth turning off the automatic mutex protection by setting the MUTEXCALLBACKS control to 0. If this is done, the user must ensure that their callbacks are threadsafe.
On some problems, it is also possible to obtain a speedup by using multiple threads for the part of the MIP solve between the initial LP relaxation and the branch-and-bound search. The default behavior here is for the Optimizer to use a single thread to create its rounds of cuts and to run its heuristic methods to obtain MIP solutions. Extra threads, dedicated to running the heuristics only, can be started by setting the HEURTHREADS control. When HEURTHREADS is set to a non-zero value, the heuristics will be run in separate threads, in parallel with cutting.
When a MIP solve is terminated early, due to, for example, a time or node limit, it is possible to select between two different termination behaviors. This choice affects the determinism of callbacks called near termination and how quickly the Optimizer stops. In the default behavior, when termination is detected, all work is immediately stopped and any partial node solves are discarded. It is therefore possible that some callbacks will have been called for nodes that are discarded at termination. Note that this termination method does not affect the final state the problem is left in after termination, and that any integer solution for which the preintsol and intsol callbacks are called will never be dropped. By setting the control MIPTERMINATIONMETHOD to 1, the termination behavior is changed such that partial work is never discarded. Instead, all worker threads are allowed to complete their current work before the solve stops. This termination behavior might cause a longer delay between the time termination is detected and the time the Optimizer stops, but it ensures that work is never dropped for any callbacks that have already been called.
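The MIP-related controls discussed above can be combined before a single solve. A hedged sketch with illustrative values and a placeholder problem name:

```c
#include "xprs.h"

int main(void)
{
    XPRSprob prob;

    XPRSinit(NULL);
    XPRScreateprob(&prob);
    XPRSreadprob(prob, "model", "");   /* placeholder problem name */

    XPRSsetintcontrol(prob, XPRS_MIPTHREADS, 4);     /* tree-search threads */
    XPRSsetintcontrol(prob, XPRS_HEURTHREADS, 2);    /* parallel heuristics */
    XPRSsetintcontrol(prob, XPRS_MUTEXCALLBACKS, 0); /* our callbacks are threadsafe */
    XPRSsetintcontrol(prob, XPRS_MIPTERMINATIONMETHOD, 1); /* never discard partial work */

    XPRSmipoptimize(prob, "");

    XPRSdestroyprob(prob);
    XPRSfree();
    return 0;
}
```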
The concurrent solver
The concurrent solve is activated either by passing multiple algorithm flags to XPRSlpoptimize (e.g. "pb" for running primal and the barrier) or by setting CONCURRENTTHREADS to a positive number. The order in which threads are allocated to the algorithms is not affected by the order of the flags provided.
If algorithm flags are specified, then the concurrent solve will run the specified algorithms, provided the setting of CONCURRENTTHREADS allows a sufficient number of threads. When no flags are specified, the automatic order of selecting algorithms starts with dual, followed by barrier and then primal. The network solver is only used if specified by flags.
CONCURRENTTHREADS represents the total target number of threads that can be used by the concurrent solve. The Optimizer will first start dual, then barrier (if CONCURRENTTHREADS > 1), followed by primal (if CONCURRENTTHREADS > 2). Any remaining threads will be allocated to the parallel barrier.
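As a worked illustration of this allocation (a fragment, assuming prob is an already initialized XPRSprob with a problem loaded): with CONCURRENTTHREADS set to 4 and no algorithm flags, dual, barrier and primal each receive one thread, and the remaining thread goes to the parallel barrier.

```c
/* Assumes prob is an initialized XPRSprob with a problem loaded.
   With 4 concurrent threads and no algorithm flags, the Optimizer
   starts dual, barrier and primal on one thread each; the fourth
   thread is given to the parallel barrier. */
XPRSsetintcontrol(prob, XPRS_CONCURRENTTHREADS, 4);
XPRSlpoptimize(prob, "");
```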
If manual algorithm selection has been made using algorithm flags, then CONCURRENTTHREADS will limit the number of algorithms started (if smaller than the number of algorithms provided), in which case the number of algorithms started will be the first CONCURRENTTHREADS in the dual → barrier → primal → network order.
Once an algorithm is started, the direct thread controls BARTHREADS and DUALTHREADS are respected. Note that, due to the latter controls, the total number of threads may exceed CONCURRENTTHREADS.
If only a single algorithm is started and the relevant controls are on automatic, the value of the THREADS control is used.
If multiple algorithms have been started and CONCURRENTTHREADS is on automatic, then THREADS will be used as the overall number of threads for the concurrent solve (unless overridden by the relevant algorithm-specific control on a per-algorithm basis).
© 2001-2020 Fair Isaac Corporation. All rights reserved. This documentation is the property of Fair Isaac Corporation ("FICO"). Receipt or possession of this documentation does not convey rights to disclose, reproduce, make derivative works, use, or allow others to use it except solely for internal evaluation purposes to determine whether to purchase a license to the software described in this documentation, or as otherwise set forth in a written software license agreement between you and FICO (or a FICO affiliate). Use of this documentation and the software described in it must conform strictly to the foregoing permitted uses, and no other use is permitted.