Advanced Usage
Topics covered in this chapter:
- Problem Names
- Manipulating the Matrix
- Working with Presolve
- Working with LP Folding
- Working with Heuristics
- Analyzing and Handling Numerical Issues
- Common Causes of Confusion
- Using the Callbacks
- Working with the Cut Manager
- Solving Problems Using Multiple Threads
- Solving Large Models (the 64 bit Functions)
- Multi-objective Optimization
- Using the Tuner
- Remote Solving with Xpress Insight Compute Interface
Problem Names
Problems loaded in the Optimizer have a name. The name is either taken from the file name, if the problem is read into the Optimizer from a file, or specified as a string in a function call when a problem is loaded into the Optimizer using the library interface. Once loaded, the name of the problem can be queried and modified. For example, the library provides the function XPRSsetprobname for changing the name of a problem.
When reading a problem from a matrix file the user can optionally specify a file extension. The search order used for matrix files in the case where the file extension is not specified is described in the reference for the function XPRSreadprob. In this case, the problem name becomes the file name, including the full path, but without the file extension.
Note that matrix files can be read directly from a gzip compressed file. Recognized names of matrix files stored with gzip compression have an extension that is one of the usual matrix file format extensions followed by the .gz extension. For example, hpw15.mps.gz.
The problem name is used as a default base name for the various file system interactions that the Optimizer may make when handling a problem. For example, when commanded to read a basis file for a problem and the basis file name is not supplied with the read basis command the Optimizer will try to open a file with the problem name appended with the .bss extension.
It is useful to note that the problem name can include file system path information. For example, c:/matrices/hpw15. Note the use of forward slashes in the Windows path string. It is recommended that Windows users use forward slashes as path delimiters in all file name specifications for the Optimizer since (i) this will work in all situations and (ii) it avoids any problems with the back slash being interpreted as the escape character.
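As an illustrative sketch (error handling omitted; the file names and paths are purely examples), the following C fragment reads a problem from file and then renames it:

```c
#include <stdio.h>
#include "xprs.h"

int main(void)
{
    XPRSprob prob;
    char name[1024];

    XPRSinit(NULL);
    XPRScreateprob(&prob);

    /* The problem name becomes the file name (with path) without
       its extension, e.g. "c:/matrices/hpw15". */
    XPRSreadprob(prob, "c:/matrices/hpw15", "");

    /* Rename the problem; default file names such as the .bss basis
       file will now be based on the new name. */
    XPRSsetprobname(prob, "c:/matrices/hpw15renamed");
    XPRSgetprobname(prob, name);
    printf("Problem name: %s\n", name);

    XPRSdestroyprob(prob);
    XPRSfree();
    return 0;
}
```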
Manipulating the Matrix
In general, the basic usage of the FICO Xpress Optimizer described in the previous chapters will be sufficient for most users' requirements. Using the Optimizer in this way simply means loading the problem, solving it, retrieving the results and finishing.
In some cases, however, it is required that the problem is first solved, then modified, and solved again. We may want to do this, for example, if a problem was found to be infeasible. In this case, to find a feasible subset of constraints we iteratively remove some constraints and re-solve the problem. Another example is when a user wants to 'generate' columns using the optimal duals of a 'restricted' LP problem. In this case we will first need to load a problem and then we will need to add columns to this problem after it has been solved.
For library users, FICO Xpress provides a suite of functions providing read and modify access to the matrix.
Reading the Matrix
The Optimizer provides a suite of routines for read access to the optimization problem including access to the objective coefficients, constraint right hand sides, decision variable bounds and the matrix coefficients.
It is important to note that the information returned by these functions will depend on whether or not the problem has been run through an optimization algorithm, or whether the problem is currently being solved by one, in which case the user will be calling the access routines from a callback (see section Using the Callbacks for details about callbacks). This dependency on when the access routine is called is mainly due to the way "presolve" methods modify the problem. How the presolve methods affect what the user accesses through the read routines is discussed in section Working with Presolve.
The user can access the names of the problem's constraints, or 'rows', as well as the decision variables, or 'columns', using the XPRSgetnamelist routine.
The linear coefficients of the problem constraints can be read using XPRSgetrows. Note that for the cases where the user requires access to the linear matrix coefficients in the column-wise sense the Optimizer includes the XPRSgetcols function. The type of the constraint, the right hand side and the right hand side range are accessed using the functions XPRSgetrowtype, XPRSgetrhs and XPRSgetrhsrange, respectively.
The coefficients of the objective function can be accessed using the XPRSgetobj routine, for the linear coefficients, and XPRSgetqobj for the quadratic objective function coefficients. The type of a column (or decision variable) and its upper and lower bounds can be accessed using the routines XPRSgetcoltype, XPRSgetub and XPRSgetlb, respectively.
The quadratic coefficients in constraints can be accessed either in matrix form, using the XPRSgetqrowqmatrix routine, or as a list of quadratic coefficients, using the XPRSgetqrowqmatrixtriplets routine.
Note that the reference section in Chapter Console and Library Functions of this manual provides details on the usage of these functions.
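As a sketch of how these read routines are typically combined (error handling omitted; the sizing call with a zero-length buffer follows the usage described in the reference for XPRSgetrows):

```c
#include <stdlib.h>
#include "xprs.h"

/* Retrieve all linear coefficients of a loaded (and not currently
   presolved) problem in row-wise format. */
void get_all_rows(XPRSprob prob)
{
    int nrows, ncoeffs;

    XPRSgetintattrib(prob, XPRS_ROWS, &nrows);

    /* First call with a zero-length buffer to obtain the number of
       coefficients that need to be stored. */
    XPRSgetrows(prob, NULL, NULL, NULL, 0, &ncoeffs, 0, nrows - 1);

    int    *mstart  = malloc((nrows + 1) * sizeof(int));
    int    *mclind  = malloc(ncoeffs * sizeof(int));
    double *dmatval = malloc(ncoeffs * sizeof(double));

    /* Second call retrieves the coefficients themselves. */
    XPRSgetrows(prob, mstart, mclind, dmatval, ncoeffs, &ncoeffs,
                0, nrows - 1);

    /* ... inspect the coefficients here ... */

    free(mstart);
    free(mclind);
    free(dmatval);
}
```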
Modifying the Matrix
The Optimizer provides a set of routines for manipulating the problem data. These include a set of routines for adding and deleting problem constraints ('rows') and decision variables ('columns'). A set of routines is also provided for changing individual coefficients of the problem and for changing the types of decision variables in the problem.
Rows and columns can be added to a problem together with their linear coefficients using XPRSaddrows and XPRSaddcols, respectively. Rows and columns can be deleted using XPRSdelrows and XPRSdelcols, respectively.
The Optimizer provides a suite of routines for modifying the data for existing rows and columns. The linear matrix coefficients can be modified using XPRSchgcoef (or use XPRSchgmcoef if a batch of coefficients are to be changed). Row and column types can be changed using the routines XPRSchgrowtype and XPRSchgcoltype, respectively. Right hand sides and their ranges may be changed with XPRSchgrhs and XPRSchgrhsrange. The linear objective function coefficients may be changed with XPRSchgobj while the quadratic objective function coefficients are changed using XPRSchgqobj (or use XPRSchgmqobj if a batch of coefficients are to be changed). Likewise, quadratic coefficients in constraints are changed with XPRSchgqrowcoeff.
Examples of the usage of all the above functions and their syntax may be found in the reference section of this manual in Chapter Console and Library Functions.
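For illustration, the following sketch changes two objective coefficients in one batch and appends a new constraint 2 x0 + 3 x5 <= 10 to a loaded problem (error handling omitted; the column indices are arbitrary examples):

```c
#include "xprs.h"

void modify_example(XPRSprob prob)
{
    /* Batch change of the linear objective coefficients of
       columns 0 and 5. */
    int    objcols[] = { 0, 5 };
    double objvals[] = { 1.5, -2.0 };
    XPRSchgobj(prob, 2, objcols, objvals);

    /* Append one 'L' (<=) row with two nonzero coefficients:
       2 x0 + 3 x5 <= 10.  No range values are needed, so NULL is
       passed for the range argument. */
    char   rowtype[] = { 'L' };
    double rhs[]     = { 10.0 };
    int    mstart[]  = { 0, 2 };
    int    mclind[]  = { 0, 5 };
    double dmatval[] = { 2.0, 3.0 };
    XPRSaddrows(prob, 1, 2, rowtype, rhs, NULL, mstart, mclind, dmatval);
}
```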
Finally, it is important to note that it is not possible to modify a matrix when it has been 'presolved' and has not been subsequently 'postsolved'. The following section Working with Presolve discusses some important points concerning reading and modifying a problem that is "presolved".
Working with Presolve
The Optimizer provides a number of algorithms for simplifying a problem prior to the optimization process. This elaborate collection of procedures, known as presolve, can often greatly improve the Optimizer's performance by modifying the problem matrix, making it easier to solve. The presolve algorithms identify and remove redundant rows and columns, reducing the size of the matrix, for which reason most users will find it a helpful tool in reducing solution times. However, presolve is included as an option and can be disabled if not required by setting the PRESOLVE control to 0. Usually this is set to 1 and presolve is called by default.
For some users the presolve routines can result in confusion, since a problem viewed in its presolved form will look very different to the original model. Under standard use of the Optimizer this may cause no difficulty. On a few occasions, however, if errors occur or if a user tries to access additional properties of the matrix for certain types of problem, the presolved values may be returned instead. In this section we provide a few notes on how such confusion may best be avoided. If you are unsure whether the matrix is in a presolved state or not, check the PRESOLVESTATE attribute.
It is important to note that when solving a problem with presolve on, the Optimizer will take a copy of the matrix and modify the copy. The original matrix is therefore preserved, but will be inaccessible to the user while the presolved problem exists. Following optimization, the whole matrix is automatically postsolved, recovering a solution to the original problem and restoring the original matrix. Consequently, either before or after, but not during, a completed optimization run, the full matrix may be viewed and altered as described above, being in its original form.
A problem might be left in a presolved state if the solve was interrupted, for example due to the Ctrl-C key combination, or if a time limit (set by MAXTIME) was reached. In such a case, the matrix can always be returned to its original state by calling XPRSpostsolve (POSTSOLVE). If the matrix is already in the original state then XPRSpostsolve (POSTSOLVE) will return without doing anything.
While a problem is in a presolved state it is not possible to make any modifications to it, such as adding rows or columns. The problem must first be returned to its original state by calling XPRSpostsolve before it can be changed.
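In code, a safe pattern after an interrupted solve is simply to postsolve before modifying, since the call does nothing when the matrix is already in its original state (sketch, error handling omitted):

```c
#include "xprs.h"

void ensure_original_state(XPRSprob prob)
{
    int state;

    /* Optionally inspect the presolve state first; the meaning of
       the individual bits is given in the reference entry for the
       PRESOLVESTATE attribute. */
    XPRSgetintattrib(prob, XPRS_PRESOLVESTATE, &state);

    /* Safe to call unconditionally: a no-op if the matrix is
       already in its original state. */
    XPRSpostsolve(prob);

    /* The problem can now be modified, e.g. with XPRSaddrows. */
}
```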
(Mixed) Integer Programming Problems
If a model contains MIP entities, integer presolve methods such as bound tightening and coefficient tightening are applied to tighten the LP relaxation. A simple example of this is a matrix with a binary variable x and a constraint x≤0.2. It follows that x can be fixed at zero, since it can never take the value 1. If presolve uses the MIP entities to alter the matrix in this way, then the LP relaxation is said to have been tightened. For Console users, notice of this is sent to the screen; for library users it may be sent to a callback function, or printed to the log file if one has been set up. In such circumstances, the optimal objective function value of the LP relaxation for a presolved matrix may be different from that for the unpresolved matrix.
The strict LP solution to a model with MIP entities can be obtained by calling the XPRSlpoptimize (LPOPTIMIZE) command. This removes the discrete restrictions from the variables, preventing the LP relaxation from being tightened, and solves the resulting matrix. In the example above, x would not be fixed at 0, but allowed to range between 0 and 0.2.
When XPRSmipoptimize (MIPOPTIMIZE) finds an integer solution, it is postsolved and saved in memory. The solution can be read with the XPRSgetsolution function. A permanent copy can be saved to a solution file by calling XPRSwritebinsol (WRITEBINSOL), or XPRSwriteslxsol (WRITESLXSOL) for a simpler text file. This can be retrieved later by calling XPRSreadbinsol (READBINSOL) or XPRSreadslxsol (READSLXSOL), respectively.
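A sketch of this workflow (error handling omitted; the file name is an example, and the XPRSgetsolution signature shown here is the one used by recent Optimizer versions):

```c
#include <stdlib.h>
#include "xprs.h"

/* Solve a MIP, read the incumbent solution and save permanent
   copies to disk. */
void solve_and_save(XPRSprob prob)
{
    int ncols, solstatus;

    XPRSmipoptimize(prob, "");
    XPRSgetintattrib(prob, XPRS_COLS, &ncols);

    double *x = malloc(ncols * sizeof(double));
    XPRSgetsolution(prob, &solstatus, x, 0, ncols - 1);

    XPRSwritebinsol(prob, "mysolution", "");  /* binary solution file */
    XPRSwriteslxsol(prob, "mysolution", "");  /* simpler text file    */

    free(x);
}
```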
After calling XPRSmipoptimize (MIPOPTIMIZE), the matrix will be postsolved whenever the MIP search has completed. If the MIP search has not completed, the matrix can be postsolved by calling the XPRSpostsolve (POSTSOLVE) function.
Working with LP Folding
In addition to the presolve procedures, the Optimizer provides an algorithm called LP folding that can further simplify LP problems. LP folding is applicable to LP problems that can be partitioned into equitable partitions; it works by aggregating the matrix columns of the equitable partitions, thereby reducing the problem size.
Solutions for the folded problem are also valid for the original problem. While it is straightforward to transfer a solution from the folded problem to the original problem, it is non-trivial to do so for the basis. When an LP problem is solved to optimality and a basis is needed, the LP unfolding will use the crossover algorithm to provide one. When the folded LP problem is unbounded or infeasible, or when the solving process is stopped due to a time or iteration limit, the basis will not be available. Please note that LP folding tends to provide solutions with a larger support (the number of variables that are not at any of their bounds).
LP folding is applied automatically when appropriate. It can be enabled or disabled by setting the LPFOLDING control.
Working with Heuristics
The Optimizer contains several primal heuristics that help to find feasible solutions during a tree search. These heuristics fall broadly into one of three classes:
- Simple rounding heuristics
These take the continuous relaxation solution to a node and, through simple roundings of the solution values for MIP entities, try to construct a feasible MIP solution. These are typically run on every node.
- Diving heuristics
These start from the continuous relaxation solution to a node and combine rounding and fixing of MIP entities with occasional reoptimization of the continuous relaxation to construct a better quality MIP solution. They are run frequently on both the root node and during the branch and bound tree search.
- Local search heuristics
The local search heuristics are generally the most expensive heuristics and involve solving one or more smaller MIPs whose feasible regions describe a neighborhood around a candidate MIP solution. These heuristics are run at the end of the root solve and typically on every 500–1000 nodes during the tree search.
Some simple heuristics and a few fast diving heuristics, which do not require a starting solution, will be tried before the initial continuous relaxation of a MIP is solved. On very simple problems, it is possible that an optimal MIP solution will be found at this point, which can lead to the initial relaxation being cut off. These heuristics can be enabled or disabled using the HEURBEFORELP control.
The most important control for steering the overall heuristic behavior of the Optimizer is HEUREMPHASIS. Setting this control to 1 will trigger many additional heuristic calls. This setting typically leads to a quicker reduction of the optimality gap at the beginning of the search. However, it often comes at the expense of an increased running time to proven optimality. Consequently, we recommend this setting for use cases that aim to find a reasonably good solution quickly but for which proving optimality is out of scope.
Setting HEUREMPHASIS to 2 will enable very aggressive heuristic behavior. While this can be beneficial for some problems, it is on average inferior to the more balanced setting of 1. Setting HEUREMPHASIS to 0 will disable all heuristics. This replaces the deprecated HEURSTRATEGY=0 setting.
In addition to running the solver with a heuristic emphasis from the beginning, a secondary application of HEUREMPHASIS is solution polishing. In this approach, the user alters the heuristic emphasis only when the solve comes close to the intended time limit or when the progress in the optimality gap stalls; this can be done, for example, from a callback. Note that this is most promising when the solver has already made significant progress in the tree search. For problems that spend a majority of the targeted solution time in the root node, we recommend setting HEUREMPHASIS=1 from the beginning instead.
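Setting the emphasis from the beginning is a one-line control change (sketch, error handling omitted; XPRS_HEUREMPHASIS is the control's constant in xprs.h):

```c
#include "xprs.h"

/* Run a MIP solve with heuristic emphasis enabled from the start. */
void solve_with_heuristic_emphasis(XPRSprob prob)
{
    XPRSsetintcontrol(prob, XPRS_HEUREMPHASIS, 1);
    XPRSmipoptimize(prob, "");
}
```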
There are a few other controls that affect heuristics. The diving heuristics have the following controls:
- HEURFREQ
- The frequency at which to run a diving heuristic during the branch and bound tree search. If HEURFREQ=k, a diving heuristic will be applied when at least k nodes of the tree search have been solved since the last run. Set this control to zero to disable diving heuristics during the tree search. With a default setting of -1, the Optimizer will automatically select a frequency that depends on how expensive it is to run and how many integer variables need to be rounded. Typically, this results in a diving heuristic being run for every 10–50 nodes.
- HEURDIVESTRATEGY
- Can be used to select one specific strategy out of the 10 predefined diving strategies; otherwise the Optimizer will automatically select the one that appears to work best. Set this control to zero to disable the diving heuristics.
- HEURDIVERANDOMIZE
- How much randomization to introduce into the diving heuristics.
- HEURDIVESPEEDUP
- The amount of effort to put into the individual dives. This essentially determines how often the continuous relaxation is reoptimized during a dive.
The local search heuristics have the following controls:
- HEURSEARCHFREQ
- The frequency at which to run the local search heuristics during the branch and bound tree search. If HEURSEARCHFREQ=k, the local search heuristics will be run when at least k nodes of the tree search have been solved since the last run.
- HEURSEARCHEFFORT
- Determines the complexity of the local search MIP problems solved and, if HEURSEARCHFREQ=-1, also how often they are applied.
- HEURSEARCHROOTSELECT
- Selects which local search heuristics are allowed to be run on the root node. Each bit of this integer control represents an individual heuristic.
- HEURSEARCHTREESELECT
- Selects which local search heuristics are allowed to be run during the branch and bound tree search.
The simple rounding heuristics do not have any further controls associated with them.
Analyzing and Handling Numerical Issues
There are many optimization applications which give rise to numerically challenging models. You might notice that the Optimizer takes an unexpectedly long time for simplex reoptimization, that minimal changes in the model lead to an unexpectedly large change in the optimal solution value, or that the optimal solution shows a certain amount of violation in the postsolved state. The Optimizer provides various tools to analyze whether a model is numerically challenging and to handle numerical issues when they occur.
Analyzing Models for Numerical Issues
There are two main reasons why a model can be numerically challenging. Firstly, it may use coefficients that span many orders of magnitude, e.g., numbers as large as 100 million mixed with numbers as small as 1 over 100 million. Those span 16 orders of magnitude, while a double-precision floating point number can only represent 15 precise digits; round-off errors are thus inevitable. Secondly, a model may contain structures that amplify the effect of numeric error propagation, e.g., when the result of subtracting two almost identical values is scaled up and then used for further computations.
To both ends, Xpress provides features to analyze models for numerical stability. Addressing the first issue, Xpress provides the user with information on the coefficient ranges in both the original problem and the problem that is solved after presolving and scaling has been applied. In the log, the minimum and maximum absolute values of the matrix coefficients, the right-hand side/bounds and the objective are printed. The relevant part for the numerical behavior of the solution process are the coefficient ranges in the solved model. The difference between the exponents of the min and max values tells you how many orders of magnitude are covered. As a rule of thumb, those should not be more than nine (and not more than six in an individual row or column of the original matrix). For MIP solves, Xpress will notify the user after the solution of the root LP when the coefficient ranges and other stability measures indicate that the solve might become numerically cumbersome. In such a case, it will print a warning "High attention level predicted from matrix features" to the log.
The second issue, error propagation, is a bit trickier to trace. The most important source to consider for this is the multiplication of a vector with the constraint matrix, which gets stored in a factorized fashion. Hence, it makes sense to consider the condition number of the basis inverse matrix. Computing this can be expensive and is hence not done by default. You can activate it by setting the MIPKAPPAFREQ control to one. When setting this control, you will get a final statistic report that summarizes the condition numbers collected during search. Besides the percentage of stable, suspicious, unstable, and ill-posed basis inverse matrices, the Optimizer will report a quantity called the attention level after the solve. The attention level takes values between zero and one. It is equal to zero if all basis inverse matrices are stable, and one if all basis inverse matrices are ill-posed. The higher the attention level, the more likely are numerical errors. As a rule of thumb, matrices with an attention level larger than 0.1 should be investigated further. The attention level is available as an attribute: ATTENTIONLEVEL.
After having solved the root LP relaxation of a MIP solve, the Optimizer applies a Machine Learning model to predict the attention level of the current MIP solve. If the prediction is larger than 0.1, it will print a message to the log: "High attention level predicted from matrix features".
The predicted attention level is available as an attribute: PREDICTEDATTLEVEL.
Finally, if the Optimizer undergoes numerical failures during the optimization process, it will report these at the end of the solve. If you see dual, primal or barrier failures, or singular inverts being reported, it might be worthwhile to try some of the methods described in the following sections.
Scaling
Scaling is a widely used preconditioning technique that aims at reducing the condition number of the constraint matrix, at reducing error propagation, and at reducing the number of LP iterations required to solve the problem. In Xpress, both columns and rows are scaled, however, only by powers of 2 to avoid round-off errors. By default, Xpress applies a machine learning algorithm to choose a scaling variant that is predicted to give the most stable performance. Although this prediction is correct in most cases, one can try the opposite setting, i.e., setting SCALING to 163 when autoscaling selected Curtis-Reid scaling, and setting SCALING to 16 when autoscaling selected standard scaling. Furthermore, disabling the special handling of big-M rows and conducting scaling before presolving, represented by bits 6 and 9 of the SCALING control respectively, is useful for some problems.
Solution Refinement
The Optimizer offers two methods of refining solutions, which are independent and complement each other. The first is called LP Refinement and aims at providing LP solutions of a higher precision, i.e., with more significant bits. It consists of two parts. Standard LP Refinement iteratively attempts to increase the accuracy of the solution until either both FEASTOLTARGET and OPTIMALITYTOLTARGET are satisfied, or accuracy cannot be increased further, or some effort limit is exhausted. It is applied by default both to LP solutions and to MIP solutions. Iterative refinement has the same goal, but uses more expensive, and also more promising, measures to achieve it, e.g., quad-precision computing. If the postsolved LP solution is slightly infeasible, setting bits 5 and 6 of the REFINEOPS control aims at reducing those infeasibilities.
The second refinement scheme is called MIP Refinement and aims at providing MIP solutions which are truly integral and will not lead to infeasibilities when fixing integer variables in the original space. Note that both Iterative Refinement and MIP Refinement can lead to a slowdown of the solution process which is more considerable the more numerically challenging the matrix is.
Other Ways to Handle Numerical Issues
In addition to the methods named above, the Optimizer gives the user the possibility to change the numerical tolerances, such as FEASTOL and MATRIXTOL, but caution is advised here. Finally, if the numerical issues mainly come from the behavior of the simplex algorithm, setting DUALSTRATEGY to values 7 or 32 might help, or even using only barrier for solving LPs during a MIP solve, achieved by changing DEFAULTALG to 4.
In any case, it is best practice to reconsider the model. If you have very small and/or very large values in there — are those really necessary? Or could they be adapted to some significantly more stable value while still representing the same logic? Can you determine places where large values might cancel each other out and the residual is used for further computations? Have you tried using indicator instead of big-M formulations?
Common Causes of Confusion
It should be noted that most of the library routines described above and in Chapter Console and Library Functions that modify the matrix will not work on a presolved matrix. The only exception is inside a callback during a MIP solve, where cuts may be added or variable bounds tightened (using XPRSchgbounds); these functions then expect references to the presolved problem. If one tries to retrieve rows, columns, bounds or the number of these, such information will come from the presolved matrix and not the original. A few functions exist which are specifically designed to work with presolved and scaled matrices, although care should be exercised in using them. Examples of these include XPRSgetpresolvesol, XPRSgetpresolvebasis, XPRSgetscaledinfeas, XPRSloadpresolvebasis and XPRSloadpresolvedirs.
Using the Callbacks
Console users are constantly provided with information on the standard output device by the Optimizer as it searches for a solution to the current problem. The same output is also available to library users if a log file has been set up using XPRSsetlogfile. However, whilst Console users can respond to this information as it is produced and allow it to influence their session, the same is not immediately true for library users, since their program must be written and compiled before the session is initiated. For such users, a more interactive alternative to the above forms of output is provided by the use of callback functions.
The library callbacks are a collection of functions which allow user-defined routines to be specified to the Optimizer. In this way, users may define their own routines which should be called at various stages during the optimization process, prompting the Optimizer to return to the user's program before continuing with the solution algorithm. Perhaps the three most general of the callback functions are those associated with the search for an LP solution. However, the vast majority of situations in which such routines might be called are associated with the tree search, and will be addressed below.
Output Callbacks
Instead of catching the standard output from the Optimizer and saving it to a log file, the callback XPRSaddcbmessage allows the user to define a routine which should be called every time a text line is output by the Optimizer. Since this returns the status of each message output, the user's routine could test for error or warning messages and take appropriate action accordingly.
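A sketch of such a message routine, splitting output by message type (the type codes follow the reference entry for XPRSaddcbmessage; error handling omitted):

```c
#include <stdio.h>
#include "xprs.h"

/* Called once per line of Optimizer output.  A negative msgtype
   indicates that output buffers should be flushed. */
static void XPRS_CC message_cb(XPRSprob cbprob, void *cbdata,
                               const char *msg, int len, int msgtype)
{
    (void)cbprob;
    (void)cbdata;
    switch (msgtype) {
    case 4:  fprintf(stderr, "Error:   %.*s\n", len, msg); break;
    case 3:  fprintf(stderr, "Warning: %.*s\n", len, msg); break;
    case 1:  printf("%.*s\n", len, msg);                   break;
    default: fflush(stdout); fflush(stderr);               break;
    }
}

/* Registered before solving with:
   XPRSaddcbmessage(prob, message_cb, NULL, 0); */
```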
LP Callbacks
The functions XPRSaddcblplog and XPRSaddcbbarlog allow the user to respond after each iteration of either the simplex or barrier algorithms, respectively. The controls LPLOG and BAROUTPUT may additionally be set to reduce the frequency at which these routines should be called.
MIP Tree Search Callbacks
When a problem with MIP entities is to be optimized, a large number of subproblems, called nodes, must typically be solved as part of the branch and bound tree search. At various points in this process user-defined routines can be called, depending on the callback that is used to specify the routine to the Optimizer.
In a branch and bound tree search, the Optimizer starts by selecting an active node amongst all candidates (known as a full backtrack) and then proceeds to solve it, which can lead to new descendant nodes being created. If there is a descendant node, the Optimizer will by default select one of these to solve next and repeat this iterative descent while new descendant nodes are being created. The dive stops when it reaches a node that is found to be infeasible or cut off, at which point the Optimizer will perform a full backtrack again and repeat the process with a new active node.
A routine may be called whenever a node is selected by the optimizer during a full backtrack, using XPRSaddcbchgnode. This will also allow a user to directly select the active node for the optimizer. Whenever a new node is created, a routine set by XPRSaddcbnewnode will be called, which can be used to record the identifier of the new node, e.g. for use with XPRSaddcbchgnode.
When the Optimizer solves a new node, it will first call any routine set by XPRSaddcbprenode, which can be used, for example, to tighten bounds on columns (with XPRSchgbounds) as part of a user node presolve. Afterwards, the LP relaxation of the node problem is solved to obtain a feasible solution and a best bound for the node. This might be followed by one or more rounds of cuts. If the node problem is found to be infeasible or cut off during this process, a routine set by XPRSaddcbinfnode will be called. Otherwise, a routine set by XPRSaddcboptnode will be called to let the user know that the Optimizer now has an optimal solution to the LP relaxation of the node problem. In this routine, the user is allowed to add cuts (see section Working with the Cut Manager) and tighten bounds to tighten the node problem, or apply branching objects (see XPRS_bo_create) to separate on the current node problem. If the user modifies the problem inside this optnode callback routine, the Optimizer will automatically resolve the node LP and, if the LP is still feasible, call the optnode routine again.
If the LP relaxation solution to the node problem also satisfies all MIP entities and the user has not added any branching objects, i.e., if it is a MIP solution, the Optimizer will call a routine set by XPRSaddcbpreintsol before saving the new solution, and call a routine set by XPRSaddcbintsol after saving the solution. These two routines will also be called whenever a new MIP solution is found using one of the Optimizer heuristics.
Otherwise, if the node LP solution does not satisfy the MIP entities (or any user branching objects), the Optimizer will proceed with branching. After the Optimizer has selected the candidate entity for branching, a routine set by XPRSaddcbchgbranch will be called, which also allows a user to change the selected candidate. If, during the candidate evaluation, the Optimizer discovers that, for example, bounds can be tightened, it will tighten the node problem and go back to resolving the node LP, followed by the callback routines explained above.
When the Optimizer finds a better MIP solution, it is possible that some of the nodes in the active nodes pool are cut off due to having an LP solution bound that is worse than the new cutoff value. For such nodes, a routine set by XPRSaddcbnodecutoff will be called and the node will be dropped from the active nodes pool.
The final MIP-related callback, XPRSaddcbmiplog, is more similar to the LP search callbacks, allowing a user's routine to be called whenever a line of the MIP log is printed. The frequency with which this occurs is set by the control MIPLOG.
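As an example of the tree search callbacks, the sketch below registers an optnode routine that tightens a column bound as a user node presolve step. The trigger condition and column index are purely illustrative; remember that indices inside the callback refer to the presolved problem:

```c
#include "xprs.h"

static void XPRS_CC optnode_cb(XPRSprob cbprob, void *cbdata,
                               int *infeasible)
{
    double objval;
    (void)cbdata;

    /* Objective value of the node's LP relaxation. */
    XPRSgetdblattrib(cbprob, XPRS_LPOBJVAL, &objval);

    if (objval > 100.0) {
        /* Tighten the upper bound of column 3 to zero; the node LP
           is automatically resolved after the modification. */
        int    col   = 3;
        char   btype = 'U';
        double bnd   = 0.0;
        XPRSchgbounds(cbprob, 1, &col, &btype, &bnd);
    }

    /* Leave *infeasible at zero unless the node should be declared
       infeasible and cut off. */
    *infeasible = 0;
}

/* Registered before the MIP solve with:
   XPRSaddcboptnode(prob, optnode_cb, NULL, 0); */
```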
Working with the Cut Manager
Cuts and the Cut Pool
Solving the LP relaxations during a branch and bound tree search is often made more efficient by supplying additional rows (constraints) to the matrix which reduce the size of the feasible region, whilst ensuring that it still contains an optimal integer solution. Such additional rows are called cutting planes, or cuts.
By default, cuts are automatically added to the matrix by the Optimizer during a tree search to speed up the solution process. However, for advanced users, the Optimizer library provides greater freedom, allowing the possibility of choosing which cuts are to be added at particular nodes, or removing cuts entirely. The cutting planes themselves are held in a cut pool, which may be manipulated using library functions.
Cuts may be added directly to the matrix at a particular node, or may be stored in the cut pool first before subsequently being loaded into the matrix. It often makes little difference which of these two approaches is adopted, although as a general rule if cuts are cheap to generate, it may be preferable to add the cuts directly to the matrix and delete any redundant cuts after each sub-problem (node) has been optimized. Any cuts added to the matrix at a node and not deleted at that node will automatically be added to the cut pool. If you wish to save all the cuts that are generated, it is better to add the cuts to the cut pool first. Cuts can then be loaded into the matrix from the cut pool. This approach has the advantage that the cut pool routines can be used to identify duplicate cuts and save only the stronger cuts.
To help track the cuts that have been added to the matrix at different nodes, the cuts can be classified according to a user-defined cut type. The cut type can either be a number such as the node number or it can be a bit map. In the latter case each bit of the cut type may be used to indicate a property of the cut. For example, cuts could be classified as local cuts applicable at the current node and its descendants, or as global cuts applicable at all nodes. If the first bit of the cut type is set this could indicate a local cut and if the second bit is set this could indicate a global cut. Other bits of the cut type could then be used to signify other properties of the cuts. The advantage of using bit maps is that all cuts with a particular property can easily be selected, for example all local cuts.
Cut Management Routines
Cuts may be added directly into the matrix at the current node using XPRSaddcuts. Any cuts added to the matrix at a node will automatically be added to the cut pool and hence restored at descendant nodes, unless specifically deleted at that node using XPRSdelcuts. XPRSdelcuts may be used to delete cuts inherited from a parent node and automatically restored, as well as those added at the current node using XPRSaddcuts or loaded from the cut pool using XPRSloadcuts.
It is recommended to delete only those cuts with basic slacks. Otherwise, the basis will no longer be valid and it may take many iterations to recover an optimal basis. If the second argument to XPRSdelcuts is set to 1, this will ensure that cuts with non-basic slacks will not be deleted, even if the other controls specify that they should be. It is highly recommended that this is always set to 1.
Cuts may be saved directly to the cut pool using the function XPRSstorecuts. Since cuts added to the cut pool are not automatically added to the matrix at the current node, any such cut must be explicitly loaded into the matrix using XPRSloadcuts before it can become active. If the third argument of XPRSstorecuts is set to 1, the cut pool will be checked for duplicate cuts with a cut type identical to the cuts being added. If a duplicate cut is found, the new cut will only be added if its right hand side value makes the cut stronger. If the cut in the cut pool is weaker than the added cut, it will be removed unless it has already been applied to active nodes of the tree. If, instead, this argument is set to 2, the same test is carried out on all cuts, ignoring the cut type. The routine XPRSdelcpcuts allows the user to remove cuts from the cut pool, unless they have already been applied to active nodes in the branch and bound tree.
A list of cuts in the cut pool may be obtained using the command XPRSgetcpcuts, whilst XPRSgetcpcutlist returns a list of their indices. A list of those cuts which are active at the current node is returned using XPRSgetcutlist.
User Cut Manager Routines
Users may also write their own cut manager routines to be called during the branch and bound search. Cuts can be added or removed on any node of the branch and bound search using a callback function set by the routine XPRSaddcboptnode (see section MIP Tree Search Callbacks).
Further details of these functions may be found in chapter Console and Library Functions within the functional reference which follows.
Solving Problems Using Multiple Threads
It is possible to use multiple processors when solving any type of problem with the Optimizer. On the more common processor types, such as those from Intel or AMD, the Optimizer will detect how many logical processors are available in the system and attempt to solve the problem in parallel using as many threads as possible. The number detected can be read through the CORESDETECTED integer attribute. It is also possible to adjust the number of threads to use by setting the integer parameter THREADS.
By default a problem will be solved deterministically, in the sense that the same solution path will be followed each time the problem is solved when given the same number of threads. For an LP this means that the number of iterations and the optimal, feasible solution returned will always be the same.
When solving a MIP deterministically, each node of the branch-and-bound tree will always be solved in the same way. Each node can be identified by a unique number, available through the attribute CURRENTNODE, and the tree will always have the same parent/child relationship in terms of these identifiers. A deterministic MIP solve will always find integer solutions on the same nodes, and the attributes and solutions reported on a node will be identical from one run to another. Since nodes are solved in parallel, however, the order in which they are solved can vary. There is an overhead in synchronizing the threads to make the parallel runs deterministic, so it can be faster to run in non-deterministic mode. This can be done by setting the DETERMINISTIC control to 0.
For an LP problem (or the initial continuous relaxation of a MIP), there are several choices of parallelism. The Newton barrier, the hybrid gradient and the dual simplex algorithms support multiple threads; the number of threads to use can be set with BARTHREADS (for the barrier and hybrid gradient algorithms) or DUALTHREADS (for dual simplex). It is also possible to run some or all of primal simplex, dual simplex and the Newton barrier or hybrid gradient algorithm side-by-side in separate threads, known as a concurrent LP solve. (The Newton barrier and the hybrid gradient algorithm cannot be run concurrently with each other.) This can be useful when none of the methods is the obvious choice. In this mode, the Optimizer will stop with the first algorithm to solve the problem. The number of threads for the concurrent LP solve can be set using CONCURRENTTHREADS. The algorithms to use for the concurrent solve can be specified by concatenating the required "d", "p", "n" and "b" flags when calling XPRSlpoptimize (LPOPTIMIZE) or XPRSmipoptimize (MIPOPTIMIZE); please refer to section The concurrent solver for more details.
When solving a MIP problem, the Optimizer will try to run the branch and bound tree search in parallel. Use the MIPTHREADS control to set the number of threads specifically for the tree search.
The operation of the optimizer for MIPs is fairly similar in serial and parallel mode. The MIP callbacks can still be used in parallel and callbacks are called when each MIP worker problem is created and destroyed. The mipthread callback (declared with XPRSaddcbmipthread) is called whenever a MIP worker problem is created and the callback declared with XPRSaddcbdestroymt is called whenever the worker problem is destroyed. Each worker problem has a unique ID which can be obtained from the MIPTHREADID integer attribute. When an executing thread solves a branch-and-bound node, it will also do so on a worker problem assigned to it. Note that a given worker problem can be assigned to different threads during its lifetime and the threads might differ from one run to another.
When the MIP callbacks are called they are MUTEX protected to allow non-threadsafe user callbacks. If a significant amount of time is spent in the callbacks, it can be worth turning off the automatic MUTEX protection by setting the MUTEXCALLBACKS control to 0. If this is done, the user must ensure that their callbacks are threadsafe.
On some problems it is also possible to obtain a speedup by using multiple threads for the MIP solve process between the initial LP relaxation solve and the branch and bound search. The default behavior here is for the Optimizer to use a single thread to create its rounds of cuts and to run its heuristic methods to obtain MIP solutions. Extra threads can be started, dedicated to running the heuristics only, by setting the HEURTHREADS control. By setting HEURTHREADS to a non-zero value, the heuristics will be run in separate threads, in parallel with cutting.
When a MIP solve is terminated early, due to, for example, a time or node limit, it is possible to select between two different termination behaviors. This has implications for the determinism of callbacks called near termination and for how quickly the Optimizer stops. In the default behavior, when termination is detected, all work is immediately stopped and any partial node solves are discarded. It is therefore possible that some callbacks will have been called for nodes that are discarded at termination. Note that this termination method does not affect the final state the problem is left in after termination, and that any integer solution for which the preintsol and intsol callbacks are called will never be dropped. By setting the control MIPTERMINATIONMETHOD to 1, the termination behavior is changed such that partial work is never discarded. Instead, all worker threads will be allowed to complete their current work before the solve stops. This termination behavior might cause a longer delay between the time termination is detected and the time the Optimizer stops, but it ensures that work is never dropped for any callbacks that have already been called.
The concurrent solver
The concurrent solve is activated either by passing multiple algorithm flags to XPRSlpoptimize (e.g. "pb" for running primal and the barrier) or by setting CONCURRENTTHREADS to a positive number. The order in which threads are allocated to the algorithms is not affected by the order of the flags provided.
Instead of passing flags, you can alternatively set the LPFLAGS control.
To activate the hybrid gradient method you can use the "b" flag and set BARALG to 4. This will start the hybrid gradient method instead of the Newton barrier algorithm.
If algorithm flags are specified or the LPFLAGS control is set, then the concurrent solver will run the specified algorithms, provided that the setting of CONCURRENTTHREADS allows for a sufficient number of threads. When no flags are specified, the automatic order of selecting algorithms starts with dual simplex, followed by barrier. Primal simplex will be added as a third solver only in a minority of cases, mainly when the problem has many more columns than rows. The network solver is only used if specified by flags.
CONCURRENTTHREADS represents the total target number of threads that can be used by the concurrent solve. The Optimizer will first start dual simplex, then barrier (if CONCURRENTTHREADS > 1) and, in some cases (see above), primal simplex (if CONCURRENTTHREADS > 2). Any remaining threads will be allocated to the parallel barrier.
If manual algorithm selection has been made using algorithm flags, then CONCURRENTTHREADS will limit the number of algorithms started (if smaller than the number of algorithms provided), in which case the number of algorithms started will be the first CONCURRENTTHREADS in the dual → barrier or hybrid gradient → primal → network order.
Once an algorithm is started, the direct thread controls BARTHREADS and DUALTHREADS are respected. Note that due to the latter controls the total number of threads may exceed CONCURRENTTHREADS.
In case a single algorithm is started and relevant controls are on automatic, the value of the THREADS control is used.
If multiple algorithms have been started and CONCURRENTTHREADS is on automatic, then THREADS will be used as the overall number of threads used in concurrent (unless overwritten by the relevant algorithm-specific control on a per-algorithm basis).
Solving Large Models (the 64 bit Functions)
The size of the models that can be loaded into the optimizer using the standard optimizer functions is limited by the largest number that can be held in a 32-bit integer. This means that it is not possible to load any problem with more than 2147483647 matrix elements with the standard optimizer functions. On 64-bit machines, it is possible to use the optimizer 64-bit functions to load problems with a larger number of elements (these functions have 64 appended to the standard optimizer function names). For example, it is possible to load a problem with a large number of elements with the XPRSloadlp64 function. The only difference between XPRSloadlp64 and XPRSloadlp is that the mstart array passed to XPRSloadlp64, containing the starting points of the elements in each column, is an array of 64-bit integers. Typically, the declaration and allocation of space for the 64-bit mstart array would be as follows:
XPRSint64 *mstart = malloc( ncol * sizeof( *mstart) );
The starting points of the elements in mstart can then exceed the largest 32-bit integer.
Wherever there is a need to pass a large array to an optimizer subroutine there is a corresponding 64-bit function. Once a large model has been loaded into the optimizer then all the standard optimizer functions (such as XPRSlpoptimize) can be used.
Note that although the 64-bit functions allow very large models to be loaded, there is no guarantee that such large problems can be solved in a reasonable time. Also if the machine doesn't have sufficient memory, the matrix will be constantly swapped in and out of memory which can slow the solution process considerably.
Multi-objective Optimization
Xpress supports problems with more than one objective function. Each objective function has a priority and a weight. Xpress will solve the problem once for each distinct objective priority that is defined, optimizing in each iteration a linear combination of the objective functions with the same priority. In the second and subsequent iterations, all objectives from previous iterations are fixed to their optimal values within some tolerance. Xpress supports a blended approach, where all objectives have equal priority but different weights; a lexicographic approach, where each objective has a different priority (and unit weight); and a hybrid approach, where several iterations of the problem are solved, each iteration consisting of solving a linear combination of several objectives.
A problem always has at least one objective, accessed with XPRSgetobj and XPRSchgobj. Additional objectives are added using XPRSaddobj, updated using XPRSchgobjn, queried using XPRSgetobjn and deleted using XPRSdelobj. Adding or deleting an objective updates the OBJECTIVES attribute. Objectives can be assigned names using XPRSaddnames. Calling XPRSgetobjn / XPRSchgobjn with an objective index of zero is equivalent to calling XPRSgetobj / XPRSchgobj.
Each objective has several controls which modify its behaviour. These controls can be modified using XPRSsetobjintcontrol and XPRSsetobjdblcontrol, and queried using XPRSgetobjintcontrol and XPRSgetobjdblcontrol.
Name | Type | Default value | Meaning |
---|---|---|---|
XPRS_OBJECTIVE_PRIORITY | integer | 0 | The priority of the objective. Objectives with higher priorities are solved before objectives with lower priorities. |
XPRS_OBJECTIVE_WEIGHT | double | 1 | The weight of the objective when blending it with other objectives with the same priority. |
XPRS_OBJECTIVE_ABSTOL | double | 0.001 | The absolute tolerance used when fixing the objective to its optimal value. |
XPRS_OBJECTIVE_RELTOL | double | 0.001 | The relative tolerance used when fixing the objective to its optimal value. |
XPRS_OBJECTIVE_RHS | double | 0 | The constant part of the objective. |
The sense of the problem as defined by OBJSENSE holds for all objectives, but the sense of each objective can be reversed by assigning a negative weight. The absolute and relative tolerances are added together when computing the value to which a solved objective will be fixed in the next iteration. For minimization problems:
objective <= optimal_value * (1 + reltol) + abstol
For maximization problems:
objective >= optimal_value * (1 - reltol) - abstol
Note that when reduced cost fixing is used (the default when solving LPs), the tolerances have a different meaning, as discussed below.
A multi-objective problem may only contain linear objective functions. The problem itself may be any kind supported by Xpress, but any nonlinear objective terms must be modeled using a transfer variable. The problem is solved by calling the appropriate function, depending on the problem type: XPRSlpoptimize, XPRSmipoptimize, XPRSnlpoptimize or XPRSoptimize.
After solving a multi-objective problem, SOLVEDOBJS is set to the number of solves that were performed. If all objectives are solved successfully, this will be equal to the number of distinct priorities assigned to objectives, excluding objectives with weight equal to zero (which are considered to be disabled). If problems were encountered, such as infeasibility due to numerical issues, then SOLVEDOBJS is set to the number of solves that were attempted, including the failed solve. The order in which the objectives are solved can be calculated as follows:
- Remove all objectives with zero weight;
- sort all remaining objectives by priority in decreasing order;
- for each distinct priority level, combine all objectives with that priority using their weights.
The attributes associated with each solve can be queried using XPRSgetobjintattrib and XPRSgetobjdblattrib. For example, XPRSgetobjintattrib can be used with MIPSOLS to query the number of MIP solutions that were found when optimizing each objective. The final value of each objective function can be queried using XPRScalcobjn.
Two additional callbacks are available during a multi-objective solve: the beforeobjective callback is fired before each optimization, and the afterobjective callback is fired after each optimization. See XPRSaddcbbeforeobjective and XPRSaddcbafterobjective for more information.
For all problem types except LPs, each objective is fixed to its optimal value in subsequent iterations by adding an additional row during presolve. For LPs, the default behaviour is to instead fix all variables whose reduced costs exceed the absolute tolerance associated with the objective. In this case, the relative tolerance is ignored. Reduced cost fixing can be disabled using MULTIOBJOPS.
Multi-objective problems can be loaded from MPS and LP files. See ROWS section and Multiple objectives for more information.
Using the Tuner
For a given optimization problem, setting suitable control parameters frequently results in improved performance, such as a reduction in solution time. The Xpress Optimizer's built-in tuner can help a user to identify a set of control settings that allows the Xpress Optimizer, Xpress SLP, or Xpress Global to solve problems faster than with the defaults.
For instances that should be tuned for heuristic performance, we recommend trying HEUREMPHASIS=1 first. This setting addresses instances that will likely not solve to proven optimality within a given time limit. It aims at reducing the primal-dual gap at an early stage of the solve, primarily by running more aggressive heuristics.
Basic Usage
With a loaded problem, the tuner can be started by simply calling TUNE from the console, or XPRStune from a user application. The tuner will then search for better control settings from a list of controls (called the tuner method). To achieve this, the tuner will solve the problem with its default baseline control settings and then solve the problem multiple times with each individual control and certain combinations of these controls.
As the tuner works by solving a problem multiple times, it is important and recommended to set time limits. Setting MAXTIME will limit the effort spent on each individual solve and setting TUNERMAXTIME will limit the overall effort of the tuner.
The tuner works with LP and MIP problems. It automatically determines the problem type by examining the characteristics of the current problem. It is possible to tune a MIP problem as an LP or vice versa by passing the flag l or g to XPRStune or TUNE.
The tuner can also work with nonlinear problems when Xpress Nonlinear or Xpress Global is available. For tuning a problem with Xpress Nonlinear, you need to either set the NLPSOLVER control to XSLP_NLPSOLVER_LOCAL or pass the flag s to XPRStune or TUNE.
The Tuner Method
A tuner method consists of a list of controls for the tuner to try. It is possible to run the tuner with different pre-defined lists of controls, so-called factory tuner methods, or with a user-defined list of controls. The tuner will automatically choose a default factory tuner method according to the problem type. A non-default factory tuner method can be selected by setting the TUNERMETHOD control. There are several choices available for factory tuner methods, among them:
- A simple MIP method, which only features a few controls and can be used in situations where tuning with the default method would take too long, e.g., because the instance to be tuned takes a long time for each individual solve
- A comprehensive MIP method, which features a larger list of controls (and control settings) and can be used when individual instance solves are relatively fast or the default method could not reveal a satisfying improvement
- A root-focus method, which only considers controls that affect the root node processing of the MIP solve. It can either be used when root and tree behavior should be tuned independently, in a two stage process, or when it is evident that improvements have to come from root node processing. When tuning with a root-focus method, it might make sense to choose minimizing the primal-dual integral as the tuner target.
- A tree-focus method, which only considers controls that affect the tree search behavior of the MIP solve. It can either be used when root and tree behavior should be tuned independently, in a two stage process, or when it is evident that improvements have to come from the tree search, e.g., because the dual bound needs better branching.
- A method for tuning primal heuristics, which should be used when finding better MIP solutions is the only focus and improving the best bound can be neglected. In this case, it might make sense to also choose improvement of the primal bound as the tuner target.
- Methods which specifically focus on SLP, MISLP, or global (MI)NLP solves. Those will be selected automatically for nonlinear problems when Xpress Nonlinear or Xpress Global are available.
Please also refer to the documentation of the TUNERMETHOD control.
A tuner method can be written out using XPRStunerwritemethod. This function creates a file in XTM format, which is effectively a list of Xpress Optimizer controls, each with a set of possible settings to try in tuning. When writing out one of the factory methods, it is recommended to first select the tuner method by setting TUNERMETHOD, or to load a target problem, so that the tuner writes out a tuner method suitable for the respective problem type.
Users can provide their own method to the tuner by setting up an XTM file (or editing one that has been written out). This can be read into the tuner with XPRStunerreadmethod. An alternate way to load a user-defined tuner method is to set the TUNERMETHODFILE control to the file name. This will only work when no tuner method has been loaded by explicitly calling XPRStunerreadmethod. If a user-defined method is successfully loaded, the tuner will use it and not load any factory tuner method.
Please refer to Appendix The Tuner Method (.xtm) File for the format of tuner method file.
The Tuner Output
While the tuner examines various control settings, it prints a progress report to the console. At the same time, it writes out the result and individual logs to the file system.
On the console, the tuner will print a one-line summary for each finished run. When a new better control setting is identified, it will be highlighted with an asterisk (*) at the beginning of its log line, followed by details of the control setting and its log file name. The console progress logging can be switched off by disabling the OUTPUTLOG control. Please refer to Appendix The Tuner Log for a more detailed description of the tuner logging.
In the background, the tuner will output the result and individual logs to the file system. By default, all the output files will be stored in the directory tuneroutput/probname/. The root folder path can be changed by setting the TUNEROUTPUTPATH control. This is the central folder in which all subfolders for the results and logs of different problems will be stored. The subfolders themselves are automatically named using the current problem name. They can be manually given a different name by setting the TUNERSESSIONNAME control. The subfolder contains one result file in XML format, and many log files, one for each evaluated control setting. The XML result file consists of the control settings, solution results and pointers to the log files of all finished tuner runs.
The file output can be turned off completely by disabling the TUNEROUTPUT control.
The Tuner Target
A tuner target defines how to compare two finished runs with different control settings.
A common use of the tuner is to pursue a reduction in solution time, where two runs are compared by their solution time and the faster one is considered better. However, when both runs time out, it is more meaningful to compare other attributes of the two runs, for example the final gap or the best integer solution for MIP problems.
The tuner chooses a default tuner target according to the problem type. For instance, comparing the time first and then the gap is the default tuner target for MIP problems. A user can select a different target by setting the TUNERTARGET control. Please refer to the documentation of TUNERTARGET for a list of supported tuner targets.
Restarting the Tuner
When tuning the same problem again, the tuner will attempt to pick up results from previous tuner runs so that it can avoid testing the same control settings again. For this, it checks whether an XML result file is available in the directory tuneroutput/probname/, see Section The Tuner Output. Reusing historical results works even when a user changes the baseline settings or uses a different tuner method; in this case, the tuner will only pick up those historical results which match the new control combinations. By default, when a new control setting is evaluated, the result will be appended to the existing result file from the previous tuner session.
This feature of reusing and appending to previous results can be switched off by setting the TUNERHISTORY control. This control has the default value 2, which allows both reusing and appending. Setting it to 1 switches off reusing of the results, while still allowing new results to be appended to the XML result file. Setting it to 0 switches off appending as well; consequently, the old result file will be overwritten. Note that all log files from previous tuner sessions are always kept, even for runs with identical settings; this is achieved by including a time stamp and a unique number in each file name. Log files can only be removed manually.
Tuner with Multiple Threads
The tuner can work in parallel, i.e., it can run several evaluations of different control settings simultaneously. When TUNERTHREADS is set to a value larger than 1, the tuner starts in parallel mode with the given number of threads. Setting the tuner threads does not affect the number of threads used by each individual run; note, however, that when different control settings are evaluated in parallel, each individual run may slow down.
When using the parallel tuner, it is worth setting the THREADS control as well, ideally such that the product of THREADS and TUNERTHREADS is at most the number of system threads.
Tuner with Problem Permutations
For a given problem, there may exist several "lucky" controls that show better performance by coincidence rather than for structural reasons. Such lucky controls will typically not work on other problems of the same type, or when a user modifies the problem slightly or updates Xpress. They can be thought of as false positives of tuning.
To address this issue, the tuner can exploit a phenomenon known as performance variability and solve the problem with multiple random permutations. When setting TUNERPERMUTE to a positive number, for each control setting, the tuner will solve the original problem and the corresponding number of permuted problems and finally aggregate their results as one. Generally, tuner results with permutations are expected to be more stable.
Tuning a Set of Problems
The tuner can tune a set of problems to search for an overall best control setting for all the problems in the set. Tuning a problem set can be started from the optimizer console with the command
tune probset problem.set
The tuner starts by checking all the problems defined in the problem set file. It will read in each problem to find out its type (one of LP, MIP, SLP, MISLP or Global) and its optimization direction. When the problem types are mixed, the tuner will quit with a warning message. The tuner can work with mixed optimization directions, in which case it treats the whole problem set as a minimization problem. For a given problem set, it is possible to force the tuner to tune the problem set as LP or MIP problems with the command
tune lpset problem.set or tune mipset problem.set
For a problem set, the tuner works by solving each individual problem in the set for each specific combination of control settings separately. When all the problems in the set are solved for a specific control setting, the tuner combines the individual problem results into a consolidated one and reports it on the console. During the solve, for each problem in the set, the tuner will output its result and log files to a path defined by TUNEROUTPUTPATH/PROBLEMNAME. For the main problem set, the tuner will write the consolidated results to the main output path, together with a concatenated copy of all the individual problem logs.
When tuning a problem set again, the tuner can pick up the results of existing runs, both for the main problem set and for each separate problem in the set. If the results for the full problem set can be recovered from the existing tuning records, the tuner skips solving them. Otherwise, the tuner goes through all the problems in the set; for each problem, it also checks whether an existing result with the specific control setting can be picked up, and skips solving where possible.
Advanced Topics
Besides explicitly calling TUNE or XPRStune, the tuner can also be started by enabling the TUNERMODE control. When this control is enabled (set to 1), all optimization calls such as XPRSmipoptimize or XPRSlpoptimize will be carried out as tuned optimizations: the Optimizer will first use the tuner to find the best settings and then apply those settings to solve the problem. Conversely, a user can disable this control (set to 0) to always disable the tuner, such that a call to XPRStune has no effect. TUNERMODE has a default value of -1, which does not affect the behaviour of any of the above mentioned functions.
When using the tuner from a user application with callbacks, the callbacks will also be passed on to each individual run. A user needs to keep in mind that these callbacks may be called multiple times from the tuner, as the tuner solves the problem multiple times. Moreover, when using the parallel tuner, it is the user's responsibility to ensure that callbacks are thread-safe.
Though the tuner is built into the Xpress Optimizer, it can also tune nonlinear problems when Xpress Nonlinear or Xpress Global is available. Currently, parallel tuning and permutations are disabled in this case.
Remote Solving with Xpress Insight Compute Interface
The Xpress Optimizer libraries can be configured to outsource optimization computation to a remote Insight server that supports the Compute Interface. Software applications which depend on the Optimizer libraries for optimization computation therefore inherit the ability to transparently send jobs to Insight. This includes the Xpress applications (Optimizer Console, Mosel, Workbench).
When a solve is started, the Optimizer library directs any operations that can be solved remotely to the remote server. Some features such as callbacks, multi-start, and the solution enumerator have restrictions applied which are documented here.
To integrate Xpress with an Insight Compute Server you must provide some configuration; see Chapter 3 of the Xpress Insight Compute Interface guide (https://www.fico.com/fico-xpress-optimization/docs/latest/insight5/compute/).
The single solve operations XPRSlpoptimize, XPRSmipoptimize, XPRSnlpoptimize, XPRSiis are supported. Calls to XPRSrepairinfeas and XPRStune which generate multiple problem solves are also supported and each solve will be outsourced to the remote Insight server. The number of parallel solves in the tuner is driven by the TUNERTHREADS control.
The Xpress Insight execution service and the local client application must use the same major version of Xpress. Remote solves by Insight are supported by Xpress v8.10 and higher. Note: if you observe a difference in the solve path, ensure that the version of Xpress on the client machine matches the version on the server, and check the hardware controls, in particular the threading controls. Solves will use the default execution service unless you specify one using the COMPUTEEXECSERVICE control or the configuration file as described in Appendix executionService.
Compute solves do not support the continuation of solves once they are interrupted, nor the multistart nonlinear algorithm.
A remote solve can be terminated by calling XPRSinterrupt. When called from the supported callbacks - with the exception of the message callback - this stops the Optimizer in the same way as for a local solve. Calling XPRSinterrupt outside of the supported callbacks will terminate the solve at the earliest opportunity, and no results will be generated.
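As a sketch of the first case, a supported callback can stop a (local or remote) solve as soon as the first incumbent is found; XPRS_STOP_USER is the user-interrupt reason code:

```c
#include "xprs.h"

/* Interrupt the solve once the first integer solution is found.
   Called from the intsol callback - a supported callback - this
   stops a remote solve in the same way as a local one. */
static void XPRS_CC intsol_stop(XPRSprob cbprob, void *data)
{
    (void)data;
    XPRSinterrupt(cbprob, XPRS_STOP_USER);
}

/* Registration, elsewhere in the application:
       XPRSaddcbintsol(prob, intsol_stop, NULL, 0);  */
```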
The remote solve is resilient to a temporary loss of connection between client and server. Xpress will try to reconnect for a period of time; a message will appear in the run log if this succeeds, or the solve will terminate with an error if it does not. If the connection between client and server remains established but the connection between the server and the executing worker is lost, the solve will be restarted to maintain determinism, and a computerestart callback will be fired to notify the calling application. Any work done by the disconnected remote worker, including any integer solution callbacks already fired, will be repeated.
Support for the following features is disabled when solving remotely, and calling the related API functions will cause a runtime error:
- multiple solution pools,
- solution enumeration,
- callbacks not listed as supported above.
Authentication
Please refer to the Insight Compute Interface guide Chapter 2 and 3 for details of connecting Xpress to a remote Insight server. (https://www.fico.com/fico-xpress-optimization/docs/latest/insight5/compute/)
Callbacks
Callbacks are supported when submitting a job to a remote machine, but are restricted to the message, barlog, miplog, lplog, cutlog, gapnotify, and intsol callbacks. Attempting to set any other callback will cause a runtime error. Controls can be changed in the usual way in all the supported callbacks with the exception of the message callback. Note: when solving remotely, the value of the control SLPAUTOSAVE is always ignored.
Within the supported callbacks, calls can be made to functions that retrieve attributes and set control values. The incumbent solution can be obtained by calling XPRSgetsolution. Calling any other API function will cause a runtime error, including any nonlinear and BCL API calls.
Any callback that retrieves hardware-related attributes will receive values from the remote server. For example, XPRS_CORESDETECTED will reflect the hardware on which the problem is being solved, not the hardware of the local client.
Licensing
When an Xpress application or an application embedding the Optimizer library is started with remote solving configured, the local license check is omitted and no local license is required to execute the application.
When a solve is started, the Optimizer instance will direct any operations that can be solved remotely to the remote server. This will also be the case if additional Optimizer instances are initiated as separate threads of the same process.
Advanced Configuration
There are some advanced settings that can be set using the Remote Solving Configuration file; this is described in section The Remote Solving Configuration file.
© 2001-2025 Fair Isaac Corporation. All rights reserved. This documentation is the property of Fair Isaac Corporation (“FICO”). Receipt or possession of this documentation does not convey rights to disclose, reproduce, make derivative works, use, or allow others to use it except solely for internal evaluation purposes to determine whether to purchase a license to the software described in this documentation, or as otherwise set forth in a written software license agreement between you and FICO (or a FICO affiliate). Use of this documentation and the software described in it must conform strictly to the foregoing permitted uses, and no other use is permitted.