
Instability

Scaling

When developing a model and defining its input data, users often produce problems whose constraints and/or columns have large ratios between the absolute values of the largest and smallest coefficients. For example:

maximize: 10⁶x + 7y = z
subject to: 10⁶x + 0.1y ≤ 100
10⁷x + 8y ≤ 500
10¹²x + 10⁶y ≤ 50×10⁶

Here the objective coefficients, constraint coefficients, and right-hand side values range between 0.1 and 10¹². We say that the model is badly scaled.

During the optimization process, the Optimizer must perform many calculations involving subtraction and division of quantities derived from the constraints and the objective function. When these calculations are carried out with values differing greatly in magnitude, the finite precision of computer arithmetic and the fixed tolerances employed by FICO Xpress result in a build-up of rounding errors to a point where the Optimizer can no longer reliably find the optimal solution.

To minimize undesirable effects, when formulating your problem try to choose units (or equivalently, scale your problem) so that the objective coefficients and matrix elements do not range over more than a factor of 10⁶, and the right-hand side and non-infinite bound values do not exceed 10⁶. One common problem is the use of large finite bound values to represent infinite bounds (i.e., no bounds); if you have to enter explicit infinite bounds, make sure you use values greater than 10²⁰, which will be interpreted as infinity by the Optimizer. Avoid large objective values that have a small relative difference, as this makes the problem hard for the dual simplex algorithm to solve. Similarly, avoid large right-hand side or bound values that are close together but not identical.
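As a minimal C sketch of the infinite-bound advice, using the xprs.h header (the helper function name and the colind argument are illustrative, not part of the library): rather than entering a large finite value such as 1e15 as an upper bound, pass XPRS_PLUSINFINITY, which the Optimizer treats as infinity.

    #include <xprs.h>

    /* Mark column `colind` as unbounded above: use XPRS_PLUSINFINITY
       (1.0e+20 in xprs.h) rather than an arbitrary large finite value. */
    int set_infinite_upper_bound(XPRSprob prob, int colind)
    {
        const char qbtype = 'U';               /* change the upper bound */
        const double bnd = XPRS_PLUSINFINITY;  /* >= 1e20 is read as infinity */
        return XPRSchgbounds(prob, 1, &colind, &qbtype, &bnd);
    }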

In the above example, both the coefficient of x and the last constraint might be better scaled. Issues arising from the first may be overcome by column scaling, effectively a change of coordinates, replacing 10⁶x by some new variable. Those from the second may be overcome by row scaling. If we set x = 10⁻⁶x' and scale the last row by 10⁻⁶, the example becomes the much better scaled problem:

maximize: x' + 7y = z
subject to: x' + 0.1y ≤ 100
10x' + 8y ≤ 500
x' + y ≤ 50
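To verify the last row, substitute x = 10⁻⁶x' into 10¹²x + 10⁶y ≤ 50×10⁶ to get 10⁶x' + 10⁶y ≤ 50×10⁶; multiplying through by 10⁻⁶ then yields x' + y ≤ 50, as shown.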

FICO Xpress also incorporates a number of automatic scaling options to improve the scaling of the matrix. However, the general techniques described below cannot replace attention to the choice of units specific to your problem. The best option is to scale your problem following the advice above, and use the automatic scaling provided by the Optimizer.

The form of scaling provided by the Optimizer depends on the setting of the bits of the control parameter SCALING. To get a particular form of scaling, set SCALING to the sum of the values corresponding to the scaling required. For instance, to get row scaling, column scaling and then row scaling again, set SCALING to 1+2+4=7. The scaling process is applied after presolve and before the optimization algorithm. The most important of the defined bits are given in the following table; for a full list, refer to SCALING in the chapter Control Parameters.

Bit   Value   Type of scaling
0     1       Row scaling.
1     2       Column scaling.
2     4       Row scaling again.
3     8       Maximin.
4     16      Curtis-Reid.
5     32      If 0, scale by geometric mean; if 1, scale by maximum element (not applicable if maximin or Curtis-Reid is specified).
7     128     Objective function scaling.
8     256     Exclude the quadratic part of constraints when calculating scaling factors.
9     512     Scale the problem before presolve is applied.

If scaling is not required, SCALING should be set to 0.
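As a sketch against the C API (the control constant XPRS_SCALING is defined in xprs.h; the function name is illustrative), the row, column, then row combination described above could be requested like this:

    #include <xprs.h>

    /* Request row scaling, column scaling, then row scaling again:
       bits 0, 1 and 2, i.e. SCALING = 1 + 2 + 4 = 7. */
    int set_row_col_row_scaling(XPRSprob prob)
    {
        return XPRSsetintcontrol(prob, XPRS_SCALING, 1 + 2 + 4);
    }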

If the user wants quick results when attempting to solve a badly scaled problem, it may be useful to run customized scaling on the problem before calling the optimization algorithm. To run the scaling process on a problem, call the routine XPRSscale (SCALE). The SCALING control determines how the scaling will be applied.

If the user applies customized scaling to a problem and subsequently modifies it, it is important to note that adding new elements to the matrix can cause the problem to become badly scaled again. This can be avoided by reapplying the scaling strategy after completing the modifications to the matrix.
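The following sketch illustrates this workflow. It assumes the C API form XPRSscale(prob, rowscale, colscale), and further assumes that passing NULL for both factor arrays leaves the choice of factors to the automatic scaling selected by the SCALING control; consult the XPRSscale reference entry for the exact argument semantics.

    #include <stddef.h>
    #include <xprs.h>

    /* Sketch: scale explicitly before optimizing, and again after the
       matrix has been modified, since new elements can spoil the scaling. */
    int scale_then_solve(XPRSprob prob)
    {
        int rc;

        /* Apply scaling as configured by the SCALING control (NULL factor
           arrays assumed to request the automatic factors). */
        rc = XPRSscale(prob, NULL, NULL);
        if (rc) return rc;

        rc = XPRSlpoptimize(prob, "");
        if (rc) return rc;

        /* ... modify the matrix here, e.g. via XPRSaddrows/XPRSaddcols ... */

        /* Reapply the scaling strategy after the modifications. */
        return XPRSscale(prob, NULL, NULL);
    }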

Finally, note that the scaling operations are determined by the matrix elements only; the objective coefficients, right-hand side values and bound values do not influence the scaling. Only continuous variables (i.e., their bounds and coefficients) and constraints (i.e., their right-hand sides and coefficients) are scaled. Discrete entities such as integer variables are not scaled, so the user should choose the scaling of these variables carefully.

Accuracy

The accuracy of the computed variable values and objective function value is affected in general by the various tolerances used in the Optimizer. Of particular relevance to MIP problems are the accuracy and cutoff controls. The MIPRELCUTOFF control has a non-zero default value, which will prevent solutions that are very close to, but better than, a known solution from being found. This control can of course be set to zero if required.
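For example, a minimal sketch against the C API (the function name is illustrative; XPRS_MIPRELCUTOFF is a double control in xprs.h):

    #include <xprs.h>

    /* Accept solutions that improve on the incumbent by any margin:
       set the relative MIP cutoff to zero (its default is non-zero). */
    int accept_marginal_improvements(XPRSprob prob)
    {
        return XPRSsetdblcontrol(prob, XPRS_MIPRELCUTOFF, 0.0);
    }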

When the LP solver stops at an optimal solution, the scaled constraints will be violated by no more than FEASTOL and the variables will be within FEASTOL of their scaled bounds. However, once the constraints and variables have been unscaled, the constraint and variable bound violations can increase to more than FEASTOL. If this happens, it indicates that the problem is badly scaled. Reducing FEASTOL can help; however, this can make the LP solve unstable and reduce solution performance.
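Tightening the tolerance might look like the sketch below (the value 1e-7 is purely illustrative; the stability caveat above applies):

    #include <xprs.h>

    /* Tighten the feasibility tolerance below its 1e-6 default.
       Caution: too small a value can make the LP solve unstable. */
    int tighten_feastol(XPRSprob prob)
    {
        return XPRSsetdblcontrol(prob, XPRS_FEASTOL, 1e-7);
    }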

However, for all problems it is probably ambitious to expect a level of accuracy in the objective of more than 1 in 1,000,000; bear in mind that the default feasibility and optimality tolerances are 10⁻⁶. It is often not practically possible to compute the solution values and reduced costs from a basis to an accuracy better than 10⁻⁸ anyway, particularly for large models: it depends on the condition number of the basis matrix and the size of the right-hand side and cost coefficients. Under reasonable assumptions, an upper bound for the computed variable value accuracy is 4 × K × ‖RHS‖ × 10⁻¹⁶, where ‖RHS‖ denotes the L-infinity norm of the right-hand side and K is the basis condition number. The basis condition number can be found using the XPRSbasiscondition (BASISCONDITION) function.
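The bound can be evaluated as in the sketch below. It assumes the two-output form XPRSbasiscondition(prob, &cond, &scaledcond), returning condition estimates for the unscaled and scaled basis; check the reference entry for the exact signature. The rhsnorm argument is a value you would compute yourself as the L-infinity norm of the right-hand side.

    #include <stdio.h>
    #include <xprs.h>

    /* Evaluate the accuracy bound 4 * K * ||RHS|| * 1e-16, where K is
       the basis condition number reported by XPRSbasiscondition. */
    int print_accuracy_bound(XPRSprob prob, double rhsnorm)
    {
        double cond = 0.0, scaledcond = 0.0;
        int rc = XPRSbasiscondition(prob, &cond, &scaledcond);
        if (rc) return rc;

        printf("basis condition number K = %g\n", cond);
        printf("accuracy bound ~= %g\n", 4.0 * cond * rhsnorm * 1e-16);
        return 0;
    }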

You should also bear in mind that the matrix is scaled, which would normally have the effect of increasing the apparent feasibility tolerance.
