Advanced Usage
Problem Names
Problems loaded into the Optimizer have a name. The name is either taken from the file name, if the problem is read from a file, or specified as a string argument in the function call that loads the problem through the library interface. Once the problem is loaded, its name can be queried and modified. For example, the library provides the function XPRSsetprobname for changing the name of a problem.
When reading a problem from a matrix file the user can optionally specify a file extension. The search order used for matrix files in the case where the file extension is not specified is described in the reference for the function XPRSreadprob. In this case, the problem name becomes the file name, including the full path, but without the file extension.
Note that matrix files can be read directly from a gzip compressed file. A compressed matrix file is recognized by its extension, which is one of the usual matrix file format extensions followed by .gz. For example, hpw15.mps.gz.
The problem name is used as a default base name for the various file system interactions that the Optimizer may make when handling a problem. For example, when commanded to read a basis file for a problem and the basis file name is not supplied with the read basis command the Optimizer will try to open a file with the problem name appended with the .bss extension.
Note that the problem name can include file system path information, for example c:/matrices/hpw15. Note the use of forward slashes in the Windows path string. It is recommended that Windows users use forward slashes as path delimiters in all file name specifications for the Optimizer, since (i) this works in all situations and (ii) it avoids any problems with the backslash being interpreted as an escape character.
Manipulating the Matrix
In general, the basic usage of the FICO Xpress Optimizer described in the previous chapters will be sufficient for most users' requirements. Used in this way, the Optimizer simply loads the problem, solves it, returns the results and finishes.
In some cases, however, the problem must first be solved, then modified, and solved again. This is required, for example, if a problem is found to be infeasible: to find a feasible subset of constraints we iteratively remove some constraints and re-solve the problem. Another example is when a user wants to 'generate' columns using the optimal duals of a 'restricted' LP problem. In this case we first load a problem and then add columns to it after it has been solved.
For library users, FICO Xpress provides a suite of functions providing read and modify access to the matrix.
Reading the Matrix
The Optimizer provides a suite of routines for read access to the optimization problem including access to the objective coefficients, constraint right hand sides, decision variable bounds and the matrix coefficients.
It is important to note that the information returned by these functions depends on whether the problem has already been run through an optimization algorithm, or is currently being solved by one, in which case the user will be calling the access routines from a callback (see section Using the Callbacks for details about callbacks). This dependency on when the access routine is called is mainly due to the way "presolve" methods are applied to modify the problem. How the presolve methods affect what the user accesses through the read routines is discussed in section Working with Presolve.
The user can access the names of the problem's constraints, or 'rows', as well as the decision variables, or 'columns', using the XPRSgetnames routine.
The linear coefficients of the problem constraints can be read using XPRSgetrows. Note that for the cases where the user requires access to the linear matrix coefficients in the column–wise sense the Optimizer includes the XPRSgetcols function. The type of the constraint, the right hand side and the right hand side range are accessed using the functions XPRSgetrowtype, XPRSgetrhs and XPRSgetrhsrange, respectively.
The coefficients of the objective function can be accessed using the XPRSgetobj routine, for the linear coefficients, and XPRSgetqobj for the quadratic objective function coefficients. The type of a column (or decision variable) and its upper and lower bounds can be accessed using the routines XPRSgetcoltype, XPRSgetub and XPRSgetlb, respectively.
The quadratic coefficients in constraints can be accessed either in matrix form, using the XPRSgetqrowqmatrix routine, or as a list of quadratic coefficients, using XPRSgetqrowqmatrixtriplets.
Note that the reference section in Chapter Console and Library Functions of this manual provides details on the usage of these functions.
Modifying the Matrix
The Optimizer provides a set of routines for manipulating the problem data. These include a set of routines for adding and deleting problem constraints ('rows') and decision variables ('columns'). A set of routines is also provided for changing individual coefficients of the problem and for changing the types of decision variables in the problem.
Rows and columns can be added to a problem together with their linear coefficients using XPRSaddrows and XPRSaddcols, respectively. Rows and columns can be deleted using XPRSdelrows and XPRSdelcols, respectively.
The Optimizer provides a suite of routines for modifying the data for existing rows and columns. The linear matrix coefficients can be modified using XPRSchgcoef (or XPRSchgmcoef if a batch of coefficients is to be changed). Row and column types can be changed using the routines XPRSchgrowtype and XPRSchgcoltype, respectively. Right hand sides and their ranges may be changed with XPRSchgrhs and XPRSchgrhsrange. The linear objective function coefficients may be changed with XPRSchgobj, while the quadratic objective function coefficients are changed using XPRSchgqobj (or XPRSchgmqobj if a batch of coefficients is to be changed). Likewise, quadratic coefficients in constraints are changed with XPRSchgqrowcoeff.
Examples of the usage of all the above functions and their syntax may be found in the reference section of this manual in Chapter Console and Library Functions.
Finally, it is important to note that it is not possible to modify a matrix when it has been 'presolved' and has not been subsequently 'postsolved'. The following section Working with Presolve discusses some important points concerning reading and modifying a problem that is "presolved".
Working with Presolve
The Optimizer provides a number of algorithms for simplifying a problem prior to the optimization process. This elaborate collection of procedures, known as presolve, can often greatly improve the Optimizer's performance by modifying the problem matrix, making it easier to solve. The presolve algorithms identify and remove redundant rows and columns, reducing the size of the matrix, for which reason most users will find it a helpful tool in reducing solution times. However, presolve is included as an option and can be disabled if not required by setting the PRESOLVE control to 0. Usually this is set to 1 and presolve is called by default.
For some users the presolve routines can be a source of confusion, since a problem viewed in its presolved form will look very different from the original model. Under standard use of the Optimizer this causes no difficulty. On a few occasions, however, if errors occur or if a user tries to access additional properties of the matrix for certain types of problem, the presolved values may be returned instead. In this section we provide a few notes on how such confusion may best be avoided. If you are unsure whether the matrix is in a presolved state, check the PRESOLVESTATE attribute.
It is important to note that when solving a problem with presolve on, the Optimizer will take a copy of the matrix and modify the copy. The original matrix is therefore preserved, but will be inaccessible to the user while the presolved problem exists. Following optimization, the whole matrix is automatically postsolved to recover a solution to the original problem and restore the original matrix. Consequently, either before or after, but not during, a completed optimization run, the full matrix may be viewed and altered as described above, being in its original form.
A problem might be left in a presolved state if the solve was interrupted, for example due to the Ctrl–C key combination, or if a time limit (set by MAXTIME) was reached. In such a case, the matrix can always be returned to its original state by calling XPRSpostsolve (POSTSOLVE). If the matrix is already in the original state then XPRSpostsolve (POSTSOLVE) will return without doing anything.
While a problem is in a presolved state it is not possible to make any modifications to it, such as adding rows or columns. The problem must first be returned to its original state by calling XPRSpostsolve before it can be changed.
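The pattern can be sketched as follows with the C library interface. This is an illustrative fragment, not a complete program: it assumes a problem object prob has already been created and loaded, and error checking is omitted.

```c
/* Solve; the search may stop early, e.g. on a MAXTIME limit or Ctrl-C. */
XPRSmipoptimize(prob, "");

/* XPRSpostsolve returns the matrix to its original state; it does
 * nothing if the matrix is already in the original state, so it is
 * safe to call unconditionally before modifying the problem. */
XPRSpostsolve(prob);

/* The problem can now be modified, e.g. with XPRSaddrows or XPRSchgobj. */
```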
(Mixed) Integer Programming Problems
If a model contains global entities, integer presolve methods such as bound tightening and coefficient tightening are applied to tighten the LP relaxation. A simple example: if the matrix has a binary variable x and one of the constraints is x≤0.2, then x can be fixed at zero, since it can never take the value 1. If presolve uses the global entities to alter the matrix in this way, the LP relaxation is said to have been tightened. For Console users, notice of this is sent to the screen; for library users it may be sent to a callback function, or printed to the log file if one has been set up. In such circumstances, the optimal objective function value of the LP relaxation for a presolved matrix may differ from that for the unpresolved matrix.
The strict LP solution to a model with global entities can be obtained by calling the XPRSlpoptimize (LPOPTIMIZE) command. This drops the global restrictions on the variables, preventing the LP relaxation from being tightened, and solves the resulting matrix. In the example above, x would not be fixed at 0, but allowed to range between 0 and 0.2.
When XPRSmipoptimize (MIPOPTIMIZE) finds an integer solution, it is postsolved and saved in memory. The solution can be read with the XPRSgetmipsol function. A permanent copy can be saved to a solution file by calling XPRSwritebinsol (WRITEBINSOL), or XPRSwriteslxsol (WRITESLXSOL) for a simpler text file. This can be retrieved later by calling XPRSreadbinsol (READBINSOL) or XPRSreadslxsol (READSLXSOL), respectively.
After calling XPRSmipoptimize (MIPOPTIMIZE), the matrix will be postsolved automatically once the MIP search has completed. If the MIP search has not completed, the matrix can be postsolved by calling the XPRSpostsolve (POSTSOLVE) function.
Working with LP Folding
In addition to the presolve procedures, the Optimizer provides an algorithm called LP folding that can further simplify LP problems. LP folding is applicable to LP problems that admit an equitable partition, and it works by aggregating the matrix columns within each class of the partition, thereby reducing the problem size.
Solutions for the folded problem are also valid for the original problem. While it is straightforward to transfer a solution from the folded problem to the original problem, it is non-trivial to do so for the basis. When an LP problem is solved to optimality and a basis is needed, the LP unfolding will use the crossover algorithm to provide one. When the folded LP problem is unbounded or infeasible, or when the solving process is stopped due to a time or iteration limit, the basis will not be available. Please note that LP folding tends to produce solutions with a larger support (a larger number of variables that are not at any of their bounds).
LP folding is applied automatically when appropriate. It can be enabled or disabled by setting the LPFOLDING control.
Working with Heuristics
The Optimizer contains several primal heuristics that help to find feasible solutions during a global search. These heuristics fall broadly into one of three classes:
- Simple rounding heuristics
These take the continuous relaxation solution to a node and, through simple roundings of the solution values for global entities, try to construct a feasible MIP solution. They are typically run on every node.
- Diving heuristics
These start from the continuous relaxation solution to a node and combine rounding and fixing of global entities with occasional reoptimization of the continuous relaxation to construct a better quality MIP solution. They are run frequently, both on the root node and during the branch and bound tree search.
- Local search heuristics
These are generally the most expensive heuristics and involve solving one or more smaller MIPs whose feasible regions describe a neighborhood around a candidate MIP solution. They are run at the end of the root solve and typically every 500–1000 nodes during the tree search.
Some simple heuristics and a few fast diving heuristics, which do not require a starting solution, will be tried before the initial continuous relaxation of a MIP is solved. On very simple problems, it is possible that an optimal MIP solution will be found at this point, which can lead to the initial relaxation being cut off. These heuristics can be enabled or disabled using the HEURBEFORELP control.
There are a few controls that affect all of the heuristics:
- HEURSTRATEGY
- Determines the level of heuristics to use. A value of 3 will allow all heuristics to be run and a value of 1 will only allow the faster rounding and diving heuristics to be run. Setting HEURSTRATEGY to 0 will disable all heuristics.
- HEURTHREADS
- The number of additional heuristic threads to start in parallel with cutting on the root node. If set to zero, heuristics will be run interleaved with cutting.
The simple rounding heuristics do not have any controls associated with them. The diving heuristics have the following controls:
- HEURFREQ
- The frequency at which to run a diving heuristic during the branch and bound tree search. If HEURFREQ=k, a diving heuristic will be applied when at least k nodes of the tree search have been solved since the last run. Set this control to zero to disable diving heuristics during the tree search. With the default setting of -1, the Optimizer automatically selects a frequency, depending on how expensive the heuristic is to run and how many integer variables need to be rounded. Typically this results in a diving heuristic being run once every 10–50 nodes.
- HEURDIVESTRATEGY
- Selects one of 10 predefined diving strategies; otherwise the Optimizer automatically selects the strategy that appears to work best. Set this control to zero to disable the diving heuristic.
- HEURDIVERANDOMIZE
- How much randomization to introduce into the diving heuristics.
- HEURDIVESPEEDUP
- The amount of effort to put into the individual dives. This essentially determines how often the continuous relaxation is reoptimized during a dive.
The local search heuristics have the following controls:
- HEURSEARCHFREQ
- The frequency at which to run the local search heuristics during the branch and bound tree search. If HEURSEARCHFREQ=k, the local search heuristics will be run when at least k nodes of the tree search have been solved since the last run.
- HEURSEARCHEFFORT
- Determines the complexity of the local search MIP problems solved and, if HEURSEARCHFREQ=-1, also how often they are applied.
- HEURSEARCHROOTSELECT
- Selects which local search heuristics are allowed to be run on the root node. Each bit of this integer control represents an individual heuristic.
- HEURSEARCHTREESELECT
- Selects which local search heuristics are allowed to be run during the branch and bound tree search.
Common Causes of Confusion
It should be noted that most of the library routines described above and in chapter Console and Library Functions that modify the matrix will not work on a presolved matrix. The only exception is inside a callback during a MIP solve, where cuts may be added or variable bounds tightened (using XPRSchgbounds); these functions expect references to the presolved problem. If one tries to retrieve rows, columns, bounds or the number of these, the information will come from the presolved matrix, not the original. A few functions are specifically designed to work with presolved and scaled matrices, although care should be exercised in using them. Examples include XPRSgetpresolvesol, XPRSgetpresolvebasis, XPRSgetscaledinfeas, XPRSloadpresolvebasis and XPRSloadpresolvedirs.
Using the Callbacks
Console users are constantly provided with information on the standard output device by the Optimizer as it searches for a solution to the current problem. The same output is also available to library users if a log file has been set up using XPRSsetlogfile. However, whilst Console users can respond to this information as it is produced and allow it to influence their session, the same is not immediately true for library users, since their program must be written and compiled before the session is initiated. For such users, a more interactive alternative to the above forms of output is provided by the use of callback functions.
The library callbacks are a collection of functions which allow user–defined routines to be specified to the Optimizer. In this way, users may define their own routines which should be called at various stages during the optimization process, prompting the Optimizer to return to the user's program before continuing with the solution algorithm. Perhaps the three most general of the callback functions are those associated with the search for an LP solution. However, the vast majority of situations in which such routines might be called are associated with the global search, and will be addressed below.
Output Callbacks
Instead of catching the standard output from the Optimizer and saving it to a log file, the callback XPRSaddcbmessage allows the user to define a routine which should be called every time a text line is output by the Optimizer. Since this returns the status of each message output, the user's routine could test for error or warning messages and take appropriate action accordingly.
LP Callbacks
The functions XPRSaddcblplog and XPRSaddcbbarlog allow the user to respond after each iteration of either the simplex or barrier algorithms, respectively. The controls LPLOG and BAROUTPUT may additionally be set to reduce the frequency at which these routines should be called.
Global Search Callbacks
When a problem with global entities is to be optimized, a large number of sub problems, called nodes, must typically be solved as part of the global tree search. At various points in this process user–defined routines can be called, depending on the callback that is used to specify the routine to the Optimizer.
In a global tree search, the Optimizer starts by selecting an active node from among all candidates (known as a full backtrack) and then proceeds to solve it, which can lead to new descendant nodes being created. If there is a descendant node, the Optimizer will by default select one of these to solve next, repeating this iterative descent as long as new descendant nodes are created. The dive stops when it reaches a node that is found to be infeasible or cut off, at which point the Optimizer performs a full backtrack again and repeats the process with a new active node.
A routine may be called whenever a node is selected by the Optimizer during a full backtrack, using XPRSaddcbchgnode. This also allows a user to directly select the active node for the Optimizer. Whenever a new node is created, a routine set by XPRSaddcbnewnode will be called, which can be used to record the identifier of the new node, e.g., for use with XPRSaddcbchgnode.
When the Optimizer solves a new node, it will first call any routine set by XPRSaddcbprenode, which can be used, e.g., to tighten bounds on columns (with XPRSchgbounds) as part of a user node presolve. Afterwards, the LP relaxation of the node problem is solved to obtain a feasible solution and a best bound for the node. This might be followed by one or more rounds of cuts. If the node problem is found to be infeasible or cut off during this process, a routine set by XPRSaddcbinfnode will be called. Otherwise, a routine set by XPRSaddcboptnode will be called to indicate that the Optimizer now has an optimal solution to the LP relaxation of the node problem. In this routine, the user may add cuts (see section Working with the Cut Manager) and tighten bounds to tighten the node problem, or apply branching objects (see XPRS_bo_create) to separate on the current node problem. If the user modifies the problem inside this optnode callback routine, the Optimizer will automatically re-solve the node LP and, if the LP is still feasible, call the optnode routine again.
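A minimal optnode callback might be sketched as follows. This is an illustrative fragment, not a complete program: it assumes a loaded problem object prob, the bound change shown is purely hypothetical, and the exact callback prototype should be checked against the reference section.

```c
/* Called whenever the Optimizer has an optimal LP relaxation solution
 * for a node. Tightening the problem here causes the node LP to be
 * re-solved and, if still feasible, this routine to be called again. */
void XPRS_CC optnode(XPRSprob cbprob, void *cbdata, int *feas)
{
    /* Hypothetical user tightening: force column 0 to zero by
     * lowering its upper bound on this node's problem. */
    int col = 0;
    char btype = 'U';
    double bnd = 0.0;
    XPRSchgbounds(cbprob, 1, &col, &btype, &bnd);
}

/* Registration, before calling XPRSmipoptimize: */
XPRSaddcboptnode(prob, optnode, NULL, 0);
```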
If the LP relaxation solution to the node problem also satisfies all global entities and the user has not added any branching objects, i.e., if it is a MIP solution, the Optimizer will call a routine set by XPRSaddcbpreintsol before saving the new solution, and call a routine set by XPRSaddcbintsol after saving the solution. These two routines will also be called whenever a new MIP solution is found using one of the Optimizer heuristics.
Otherwise, if the node LP solution does not satisfy the global entities (or any user branching objects), the Optimizer will proceed with branching. After the Optimizer has selected the candidate entity for branching, a routine set by XPRSaddcbchgbranch will be called, which also allows a user to change the selected candidate. If, during the candidate evaluation, the Optimizer discovers that, e.g., bounds can be tightened, it will tighten the node problem and return to re-solving the node LP, followed by the callback routines explained above.
When the Optimizer finds a better MIP solution, it is possible that some of the nodes in the active node pool are cut off because their LP solution bound is worse than the new cutoff value. For such nodes, a routine set by XPRSaddcbnodecutoff will be called and the node will be dropped from the active node pool.
The final global callback, XPRSaddcbgloballog, is similar to the LP search callbacks, allowing a user's routine to be called whenever a line of the global log is printed. The frequency with which this occurs is set by the control MIPLOG.
Working with the Cut Manager
Cuts and the Cut Pool
Solving the LP relaxations during a global search is often made more efficient by supplying additional rows (constraints) to the matrix which reduce the size of the feasible region, whilst ensuring that it still contains an optimal integer solution. Such additional rows are called cutting planes, or cuts.
By default, cuts are automatically added to the matrix by the Optimizer during a global search to speed up the solution process. However, for advanced users, the Optimizer library provides greater freedom, allowing the possibility of choosing which cuts are to be added at particular nodes, or removing cuts entirely. The cutting planes themselves are held in a cut pool, which may be manipulated using library functions.
Cuts may be added directly to the matrix at a particular node, or may be stored in the cut pool first before subsequently being loaded into the matrix. It often makes little difference which of these two approaches is adopted, although as a general rule if cuts are cheap to generate, it may be preferable to add the cuts directly to the matrix and delete any redundant cuts after each sub–problem (node) has been optimized. Any cuts added to the matrix at a node and not deleted at that node will automatically be added to the cut pool. If you wish to save all the cuts that are generated, it is better to add the cuts to the cut pool first. Cuts can then be loaded into the matrix from the cut pool. This approach has the advantage that the cut pool routines can be used to identify duplicate cuts and save only the stronger cuts.
To help track the cuts that have been added to the matrix at different nodes, the cuts can be classified according to a user–defined cut type. The cut type can either be a number such as the node number or it can be a bit map. In the latter case each bit of the cut type may be used to indicate a property of the cut. For example, cuts could be classified as local cuts applicable at the current node and its descendants, or as global cuts applicable at all nodes. If the first bit of the cut type is set this could indicate a local cut and if the second bit is set this could indicate a global cut. Other bits of the cut type could then be used to signify other properties of the cuts. The advantage of using bit maps is that all cuts with a particular property can easily be selected, for example all local cuts.
Cut Management Routines
Cuts may be added directly into the matrix at the current node using XPRSaddcuts. Any cuts added to the matrix at a node will be automatically added to the cut pool and hence restored at descendant nodes unless specifically deleted at that node, using XPRSdelcuts. Cuts may be deleted from a parent node which have been automatically restored, as well as those added to the current node using XPRSaddcuts, or loaded from the cut pool using XPRSloadcuts.
It is recommended to delete only those cuts with basic slacks; otherwise the basis will no longer be valid and it may take many iterations to recover an optimal basis. If the second argument to XPRSdelcuts is set to 1, cuts with non-basic slacks will not be deleted, even if the remaining arguments specify that they should be. It is highly recommended that this argument always be set to 1.
Cuts may be saved directly to the cut pool using the function XPRSstorecuts. Since cuts added to the cut pool are not automatically added to the matrix at the current node, any such cut must be explicitly loaded into the matrix using XPRSloadcuts before it can become active. If the third argument of XPRSstorecuts is set to 1, the cut pool will be checked for duplicate cuts with a cut type identical to the cuts being added. If a duplicate cut is found, the new cut will only be added if its right hand side value makes the cut stronger. If the cut in the cut pool is weaker than the added cut, it will be removed unless it has already been applied to active nodes of the tree. If, instead, this argument is set to 2, the same test is carried out on all cuts, ignoring the cut type. The routine XPRSdelcpcuts allows the user to remove cuts from the cut pool, unless they have already been applied to active nodes in the branch and bound tree.
A list of cuts in the cut pool may be obtained using the command XPRSgetcpcuts, whilst XPRSgetcpcutlist returns a list of their indices. A list of those cuts which are active at the current node is returned using XPRSgetcutlist.
User Cut Manager Routines
Users may also write their own cut manager routines to be called at various points during the branch and bound search. The routine XPRSaddcboptnode can be used to set a callback function (see section Global Search Callbacks) that allows cuts to be added or removed during the branch and bound search.
Further details of these functions may be found in chapter Console and Library Functions within the functional reference which follows.
Solving Problems Using Multiple Threads
It is possible to use multiple processors when solving any type of problem with the Optimizer. On the more common processor types, such as those from Intel or AMD, the Optimizer will detect how many logical processors are available in the system and attempt to solve the problem in parallel using as many threads as possible. The number detected can be read through the CORESDETECTED integer attribute. It is also possible to adjust the number of threads to use by setting the integer parameter THREADS.
By default a problem will be solved deterministically, in the sense that the same solution path will be followed each time the problem is solved when given the same number of threads. For an LP this means that the number of iterations and the optimal, feasible solution returned will always be the same.
When solving a MIP deterministically, each node of the branch-and-bound tree will always be solved in the same way. Each node of the branch-and-bound tree can be identified by a unique number, available through the attribute CURRENTNODE, and the tree will always have the same parent/child relationship in terms of these identifiers. A deterministic MIP solve will always find integer solutions at the same nodes, and the attributes and solutions on a node will be identical from one run to another. Since nodes are solved in parallel, however, the order in which they are solved can vary. There is an overhead in synchronizing the threads to make parallel runs deterministic, and it can be faster to run in non-deterministic mode. This can be done by setting the DETERMINISTIC control to 0.
For an LP problem (or the initial continuous relaxation of a MIP), there are several choices of parallelism. Both the barrier algorithm and the dual simplex algorithm support multiple threads. The number of threads to use can be set with BARTHREADS or DUALTHREADS, respectively. It is also possible to run some or all of primal simplex, dual simplex and the Barrier algorithm side–by–side in separate threads, known as a concurrent LP solve. This can be useful when none of the methods is the obvious choice. In this mode, the Optimizer will stop with the first algorithm to solve the problem. The number of threads for the concurrent LP solve can be set using CONCURRENTTHREADS. The algorithms to use for the concurrent solve can be specified by concatenating the required "d", "p", "n" and "b" flags when calling XPRSlpoptimize (LPOPTIMIZE) or XPRSmipoptimize (MIPOPTIMIZE); please refer to section The concurrent solver for more details.
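As a sketch of a concurrent solve restricted to dual simplex and barrier (this fragment assumes a loaded LP in a problem object prob and omits error checking):

```c
/* Target two threads for the concurrent solve. */
XPRSsetintcontrol(prob, XPRS_CONCURRENTTHREADS, 2);

/* "d" = dual simplex, "b" = barrier; the Optimizer stops with
 * whichever algorithm solves the problem first. */
XPRSlpoptimize(prob, "db");
```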
When solving a MIP problem, the Optimizer will try to run the branch and bound tree search in parallel. Use the MIPTHREADS control to set the number of threads specifically for the tree search.
The operation of the optimizer for MIPs is fairly similar in serial and parallel mode. The MIP callbacks can still be used in parallel and callbacks are called when each MIP worker problem is created and destroyed. The mipthread callback (declared with XPRSaddcbmipthread) is called whenever a MIP worker problem is created and the callback declared with XPRSaddcbdestroymt is called whenever the worker problem is destroyed. Each worker problem has a unique ID which can be obtained from the MIPTHREADID integer attribute. When an executing thread solves a branch-and-bound node, it will also do so on a worker problem assigned to it. Note that a given worker problem can be assigned to different threads during its lifetime and the threads might differ from one run to another.
When the MIP callbacks are called they are MUTEX protected, to allow non-threadsafe user callbacks. If a significant amount of time is spent in the callbacks, it may be worth turning off the automatic MUTEX protection by setting the MUTEXCALLBACKS control to 0. If this is done, the user must ensure that their callbacks are threadsafe.
On some problems it is also possible to obtain a speedup by using multiple threads for the MIP solve process between the initial LP relaxation solve and the branch and bound search. The default behavior here is for the Optimizer to use a single thread to create its rounds of cuts and to run its heuristic methods to obtain MIP solutions. Extra threads can be started, dedicated to running the heuristics only, by setting the HEURTHREADS control. By setting HEURTHREADS to a non–zero value, the heuristics will be run in separate threads, in parallel with cutting.
When a MIP solve is terminated early, for example due to a time or node limit, it is possible to select between two different termination behaviors. This has implications for the determinism of callbacks called near termination and for how quickly the Optimizer stops. In the default behavior, when termination is detected, all work is immediately stopped and any partial node solves are discarded. It is therefore possible that some callbacks will have been called for nodes that are discarded at termination. Note that this termination method does not affect the final state the problem is left in after termination, and that any integer solution for which the preintsol and intsol callbacks are called will never be dropped. By setting the control MIPTERMINATIONMETHOD to 1, the termination behavior is changed such that partial work is never discarded. Instead, all worker threads are allowed to complete their current work before the solve stops. This termination behavior might cause a longer delay between termination being detected and the Optimizer stopping, but it ensures that work is never dropped for any callbacks that have already been called.
The concurrent solver
The concurrent solve is activated either by passing multiple algorithm flags to XPRSlpoptimize (e.g. "pb" for running primal and the barrier) or by setting CONCURRENTTHREADS to a positive number. The order in which threads are allocated to the algorithms is not affected by the order of the flags provided.
If algorithm flags are specified, then concurrent will run the specified algorithms, if the setting of CONCURRENTTHREADS allows for a sufficient number of threads. When no flags are specified, the automatic order of selecting algorithms starts with dual, followed by barrier and then primal. The network solver is only used if specified by flags.
CONCURRENTTHREADS represents the total target number of threads that can be used by concurrent. The optimizer will then first start dual then barrier (if CONCURRENTTHREADS > 1) followed by primal (if CONCURRENTTHREADS > 2). Any remaining threads will be allocated to parallel barrier.
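The automatic start order just described can be modelled as a small function. This is an illustrative sketch of the documented behaviour only, not Xpress code; the struct and function names are hypothetical.

```c
/* Illustrative model of the documented automatic start order for the
   concurrent solve: dual first, then barrier, then primal, with any
   remaining threads allocated to the parallel barrier. Not Xpress code. */
typedef struct {
    int dual, primal;          /* 1 if the algorithm is started */
    int barrier_threads;       /* threads allocated to barrier  */
} concurrent_plan;

concurrent_plan plan_concurrent(int concurrentthreads)
{
    concurrent_plan p = {0, 0, 0};
    p.dual = 1;                                /* dual always starts   */
    if (concurrentthreads > 1)
        p.barrier_threads = 1;                 /* barrier joins second */
    if (concurrentthreads > 2)
        p.primal = 1;                          /* primal joins third   */
    if (concurrentthreads > 3)                 /* leftovers go to the  */
        p.barrier_threads += concurrentthreads - 3; /* parallel barrier */
    return p;
}
```

For example, under this model CONCURRENTTHREADS = 4 starts dual, barrier and primal, with two threads going to the barrier.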
If manual algorithm selection has been made using algorithm flags, then CONCURRENTTHREADS will limit the number of algorithms started (if smaller than the number of algorithms provided), in which case the number of algorithms started will be the first CONCURRENTTHREADS in the dual → barrier → primal → network order.
Once an algorithm is started, the direct thread controls BARTHREADS and DUALTHREADS are respected. Note that, due to the latter controls, the total number of threads may exceed CONCURRENTTHREADS.
If only a single algorithm is started and the relevant controls are on their automatic setting, the value of the THREADS control is used.
If multiple algorithms have been started and CONCURRENTTHREADS is on automatic, then THREADS is used as the overall number of threads for the concurrent solve (unless overridden by the relevant algorithm-specific control on a per-algorithm basis).
Solving Large Models (the 64 bit Functions)
The size of the models that can be loaded into the Optimizer using the standard optimizer functions is limited by the largest number that can be held in a 32-bit integer. This means that it is not possible to load any problem with more than 2147483647 matrix elements with the standard optimizer functions. On 64-bit machines, it is possible to use the optimizer 64-bit functions to load problems with a larger number of elements (these functions have 64 appended to the standard optimizer function names). For example, it is possible to load a problem with a large number of elements with the XPRSloadlp64 function. The only difference between XPRSloadlp64 and XPRSloadlp is that the mstart array, containing the starting points of the elements in each column, that is passed to XPRSloadlp64 is a pointer to an array of 64-bit integers. Typically, the declaration and allocation of space for the 64-bit mstart array would be as follows:
XPRSint64 *mstart = malloc(ncol * sizeof(*mstart));
The starting points of the elements in mstart can then exceed the largest 32–bit integer.
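To illustrate, the sketch below fills a 64-bit column-start array for a matrix in which every column holds the same number of nonzeros. `XPRSint64` is assumed here to be a 64-bit signed integer, standing in for the typedef provided by xprs.h; the helper function is hypothetical and allocates one extra entry holding the total element count, a common convention for column-start arrays.

```c
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for the typedef provided by xprs.h (assumed to be 64-bit). */
typedef int64_t XPRSint64;

/* Build a column-start array for a matrix in which every column holds
   `nelem_per_col` nonzeros. With 64-bit starts, the offsets may exceed
   the largest 32-bit integer. Returns NULL on allocation failure. */
XPRSint64 *make_mstart(int ncol, XPRSint64 nelem_per_col)
{
    XPRSint64 *mstart = malloc((ncol + 1) * sizeof(*mstart));
    if (mstart == NULL)
        return NULL;
    for (int c = 0; c <= ncol; c++)
        mstart[c] = c * nelem_per_col;  /* start of column c's elements */
    return mstart;
}
```

With, say, three columns of a billion elements each, the final offset is 3000000000, beyond the 32-bit range, which is exactly the case the 64-bit functions exist for.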
Wherever there is a need to pass a large array to an optimizer subroutine there is a corresponding 64–bit function. Once a large model has been loaded into the optimizer then all the standard optimizer functions (such as XPRSlpoptimize) can be used.
Note that although the 64-bit functions allow very large models to be loaded, there is no guarantee that such large problems can be solved in a reasonable time. Also, if the machine does not have sufficient memory, the matrix will be repeatedly swapped in and out of memory, which can slow the solution process considerably.
Using the Tuner
For a given optimization problem, setting suitable control parameters frequently results in improved performance, such as a reduction in solution time. The Xpress Optimizer's built-in tuner can help a user identify a set of control settings that allows the Xpress Optimizer or Xpress SLP to solve problems faster than with the defaults.
Basic Usage
With a loaded problem, the tuner can be started by simply calling TUNE from the console, or XPRStune from a user application. The tuner will then search for better control settings from a list of controls (called the tuner method). To achieve this, the tuner will solve the problem with its default baseline control settings and then solve the problem multiple times with each individual control and certain combinations of these controls.
As the tuner works by solving a problem multiple times, it is important and recommended to set time limits. Setting MAXTIME will limit the effort spent on each individual solve, and setting TUNERMAXTIME will limit the overall effort of the tuner.
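From the library interface, the two limits might be set before starting the tuner as in the following configuration sketch. Error checking is omitted, `prob` is assumed to be an initialized problem, and both controls are assumed here to be integer controls measured in seconds.

```c
/* Sketch: cap each individual solve at 60 seconds and the whole
   tuning session at one hour, then start the tuner with the
   default method and target. Error checking omitted. */
XPRSsetintcontrol(prob, XPRS_MAXTIME, 60);
XPRSsetintcontrol(prob, XPRS_TUNERMAXTIME, 3600);
XPRStune(prob, "");
```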
The tuner works with LP and MIP problems. It automatically determines the problem type by examining the characteristics of the current problem. It is possible to tune a MIP problem as an LP or vice versa by passing the flag l or g to XPRStune or TUNE.
The tuner can also work with SLP and MISLP problems when Xpress Nonlinear is available. Note that for SLP or MISLP problems, the time limit is set with XSLP_MAXTIME.
The Tuner Method
A tuner method consists of a list of controls for the tuner to try. It is possible to run the tuner with different pre-defined lists of controls, so-called factory tuner methods, or with a user-defined list of controls. By default, the tuner automatically chooses a factory tuner method according to the problem type. A non-default factory tuner method can be selected by setting the TUNERMETHOD control. Please refer to the documentation of TUNERMETHOD for a list of available factory tuner methods.
A tuner method can be written out using XPRStunerwritemethod. This function creates a file in XTM format, which is effectively a list of Xpress Optimizer controls, each with a set of possible settings to try in tuning. When writing out one of the factory methods, it is recommended to first select the tuner method by setting TUNERMETHOD, or to load a target problem, so that the tuner can write out a suitable tuner method for the respective problem type.
Users can provide their own method to the tuner by setting up an XTM file (or editing one that has been written out). This can be read into the tuner with XPRStunerreadmethod. An alternative way to load a user-defined tuner method is to set the TUNERMETHODFILE control to the file name. This only works when no tuner method has been loaded by explicitly calling XPRStunerreadmethod. If a user-defined method is successfully loaded, the tuner will use it and will not load any factory tuner method.
Please refer to Appendix The Tuner Method (.xtm) File for the format of the tuner method file.
The Tuner Output
While the tuner examines various control settings, it prints a progress report to the console. At the same time, it writes out the result and individual logs to the file system.
On the console, the tuner prints a one-line summary for each finished run. When a new, better control setting is identified, it is highlighted with an asterisk (*) at the beginning of its log line, followed by details of the control setting and its log file name. The console progress logging can be switched off by disabling the OUTPUTLOG control. Please refer to Appendix The Tuner Log for a more detailed description of the tuner logging.
In the background, the tuner will output the result and individual logs to the file system. By default, all the output files will be stored in the directory tuneroutput/probname/. The root folder path can be changed by setting the TUNEROUTPUTPATH control. This is the central folder in which all subfolders for the results and logs of different problems will be stored. The subfolders themselves are automatically named using the current problem name. They can be manually given a different name by setting the TUNERSESSIONNAME control. The subfolder contains one result file in XML format, and many log files, one for each evaluated control setting. The XML result file consists of the control settings, solution results and pointers to the log files of all finished tuner runs.
The file output can be turned off completely by disabling the TUNEROUTPUT control.
The Tuner Target
A tuner target defines how to compare two finished runs with different control settings.
A common usage of the tuner is to pursue a reduction in solution time, where two runs are compared by their solution time and the faster one is considered better. However, when both runs time out, it is more meaningful to compare other attributes of the two runs, for example the final gap or the best integer solution for MIP problems.
The tuner will choose a default tuner target according to the problem type. For instance, comparing the time first and then the gap is the default tuner target for MIP problems. A user can select a different target by setting the TUNERTARGET control. Please refer to the documentation of TUNERTARGET for a list of supported tuner targets.
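A tuner target of this kind amounts to a comparator over finished runs. The following is an illustrative sketch of the default MIP behaviour described above (time first, then gap), not Xpress code; the struct and function names are hypothetical.

```c
/* Illustrative comparator modelling the default MIP tuner target:
   compare solve time first; on a tie (e.g. both runs hit the time
   limit), compare the remaining gap. Returns a negative value if
   run a is better, positive if run b is better, 0 if equal. */
typedef struct {
    double time;   /* solution time in seconds          */
    double gap;    /* remaining MIP gap at termination  */
} tuner_run;

int compare_runs(const tuner_run *a, const tuner_run *b)
{
    if (a->time < b->time) return -1;   /* faster run wins outright */
    if (a->time > b->time) return 1;
    if (a->gap < b->gap) return -1;     /* tie-break on the gap     */
    if (a->gap > b->gap) return 1;
    return 0;
}
```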
Restarting the Tuner
When tuning the same problem again, the tuner will attempt to pick up results from previous tuner runs so that it can avoid testing the same control settings again. For this, it checks whether an XML result file is available in the directory tuneroutput/probname/, see Section The Tuner Output. Reusing historical results works even when a user changes the baseline settings or uses a different tuner method. In this case, the tuner will only pick up historical results which match the new control combinations. By default, when a new control setting is evaluated, the result will be appended to the existing result file from the previous tuner session.
This feature of reusing and appending to previous results can be switched off by setting the TUNERHISTORY control. This control has the default value 2, which allows both reusing and appending. Setting it to 1 switches off reusing of the results, while still allowing new results to be appended to the XML result file. Setting it to 0 switches off appending as well; consequently, the old result file will be overwritten. Note that all log files from previous tuner sessions will always be kept, even if they were run with identical settings. This is achieved by including a time stamp and a unique number in the file name. Log files can only be removed manually.
Tuner with Multiple Threads
The tuner can work in parallel, i.e., it can run several evaluations of different control settings simultaneously. When TUNERTHREADS is set to a value larger than 1, the tuner will start in parallel mode with the given number of threads. Setting the tuner threads does not affect the number of threads used by each individual run. However, when different control settings are solved in parallel, each individual run may slow down.
When using the parallel tuner, it is worth setting the THREADS control as well; ideally such that the product of THREADS and TUNERTHREADS is at most the number of system threads.
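That budgeting rule can be sketched as a small helper that picks a per-solve THREADS value for a given TUNERTHREADS so that the product stays within the machine's hardware threads. This is purely illustrative; the function name is hypothetical.

```c
/* Pick a per-solve THREADS value so that tunerthreads * threads does
   not exceed the available hardware threads (illustrative sketch). */
int threads_per_solve(int hardware_threads, int tunerthreads)
{
    int t = hardware_threads / tunerthreads;  /* integer division */
    return t < 1 ? 1 : t;                     /* at least one thread */
}
```

For example, on a 16-thread machine with TUNERTHREADS set to 4, this suggests THREADS = 4, so the four concurrent tuner runs together use all 16 hardware threads without oversubscription.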
Tuner with Problem Permutations
For a given problem, there may exist several "lucky" controls that show better performance by coincidence rather than for structural reasons. Such lucky controls will typically not work with other problems of the same type, or when a user modifies the problem slightly or updates Xpress. They can be thought of as false positives of tuning.
To address this issue, the tuner can exploit a phenomenon known as performance variability and solve the problem with multiple random permutations. When setting TUNERPERMUTE to a positive number, for each control setting, the tuner will solve the original problem and the corresponding number of permuted problems and finally aggregate their results as one. Generally, tuner results with permutations are expected to be more stable.
Tuning a Set of Problems
The tuner can tune a set of problems to search for an overall best control setting for all the problems in the set. Tuning a problem set can be started from the optimizer console with the command
tune probset problem.set
The tuner starts by checking all the problems defined in the problem set file. It will read in each problem to find out its type (one of LP, MIP, SLP and MISLP) and optimization direction. When there are mixed problem types, the tuner will quit with a warning message. The tuner can work with mixed optimization directions, in which case it will treat the whole problem set as a minimization problem. For a given problem set, it is possible to force the tuner to tune the problem set as LP or MIP problems with the commands
tune lpset problem.set or tune mipset problem.set
For a problem set, the tuner works by solving each individual problem in the set for each specific combination of control settings separately. When all the problems in the set are solved for a specific control setting, the tuner combines the individual problem results into a consolidated one and reports it on the console. During the solve, for each problem in the set, the tuner will output its result and log files to a path defined by TUNEROUTPUTPATH/PROBLEMNAME. For the main problem set, the tuner will write the consolidated results to the main output path, together with a concatenated copy of all the individual problem logs.
When tuning a problem set again, the tuner can pick up the results of existing runs for the main problem set, as well as for each separate problem in the set. If the result for the full problem set can be recovered from the existing tuning records, the tuner will omit solving it again. Otherwise, the tuner will go through all the problems in the set. For each problem, the tuner will also check whether it is possible to pick up an existing result with the specific control setting, and will omit solving where possible.
Advanced Topics
Besides explicitly calling TUNE or XPRStune, the tuner can also be started by enabling the TUNERMODE control. When this control is enabled (set to 1), all optimization calls such as XPRSmipoptimize or XPRSlpoptimize will be carried out as tuned optimizations. The Optimizer will first use the tuner to find the best setting and then apply that setting to solve the problem. Conversely, a user can disable this control (set it to 0) to always disable the tuner, such that a call to XPRStune has no effect. TUNERMODE has a default value of -1, which does not affect the behaviour of any of the above-mentioned functions.
When using the tuner from a user application with callbacks, the callbacks will also be passed on to each individual run. A user needs to keep in mind that these callbacks may be called multiple times from the tuner, as the tuner solves the problem multiple times. Moreover, when using the parallel tuner, it is the user's responsibility to ensure that callbacks are thread-safe.
Though the tuner is built into the Xpress Optimizer, it can also tune nonlinear problems when Xpress Nonlinear is available. Currently, parallel tuning and permutations are disabled in this case.