Despite the maturity of LP technology, some use cases require more advanced techniques. For example, a number of different LP algorithms and implementations are available, each of which has strengths and weaknesses. Furthermore, numerical instability can cause solvers to slow down or fail to solve certain models.
This guide introduces the concepts and provides examples to help you get the most performance and reliability out of LP solvers.
## Concepts

This section presents key concepts for advanced use of LP solvers. We assume that readers are familiar with the concept of duality in LP.

### Families of LP algorithms

The following classes of algorithms for LP are accessible via OR-Tools.
The Simplex algorithm was the first practical LP algorithm and remains the most popular. The algorithm walks along the vertices (corner points) of the feasible region, iteratively improving the value of the objective function until reaching an optimal solution. There are two types of simplex algorithms:

- **Primal simplex** takes steps along the vertices of the primal feasible region. This variant is particularly effective at solving a sequence of LP problems with varying objective functions, that is, where the primal feasible region is fixed.
- **Dual simplex** takes steps along the vertices of the dual feasible region. This variant is particularly effective at solving a sequence of LP problems where the dual feasible region is fixed, for example, when only bounds on variables change. For this reason, dual simplex is used extensively in MIP solvers.
Barrier or interior-point methods were the second practical family of algorithms for LP. Barrier methods pair theoretical guarantees of efficient (polynomial-time) convergence with reliable performance in practice. They complement the simplex algorithm when it performs poorly; for example, some solvers offer to run both simplex and barrier in parallel, returning the solution from the algorithm that finishes first.
First-order methods are a family of algorithms that use exclusively gradient information (that is, first-order derivatives) to guide the iterations. Gradient descent is a well-known example. These methods are popular in nonlinear optimization and machine learning. For LP, first-order methods can scale to larger problems than simplex and barrier, and may also have much smaller memory requirements. On the other hand, they are more sensitive to numerical issues and may struggle to obtain highly accurate solutions.
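To make "exclusively gradient information" concrete, here is a toy gradient-descent loop on a smooth one-variable function. It is not an LP method (PDLP and its relatives use more elaborate primal-dual updates), but it shows the defining trait: each iterate is updated using only a first derivative, with no matrix factorizations.

```python
# Minimize f(x) = (x - 3)^2 using only its gradient f'(x) = 2*(x - 3).
x = 0.0
step_size = 0.1
for _ in range(200):
    x -= step_size * 2 * (x - 3)
print(round(x, 6))  # converges to the minimizer 3.0
```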
The table below lists the LP solvers available in OR-Tools and indicates which of the three families of algorithms is implemented in each solver.
| Solver | Simplex | Barrier | First-order |
| ------ | ------- | ------- | ----------- |
| Clp | X | X | |
| CPLEX^{L} | X | X | |
| Glop^{G} | X | | |
| GLPK | X | X | |
| Gurobi^{L} | X | X | |
| PDLP^{G} | | | X |
| Xpress^{L} | X | X | |
^{G} indicates the solver is developed by Google. ^{L} indicates that the solver requires a license issued by the respective third-party developer.
See Recommendations for suggestions on which solvers and algorithms to use.
### What does solving really mean?

To get the most out of LP solvers, it's important to understand what is implied when a solver claims to have found a solution to an LP problem. This section covers the basics necessary for answering this question, in particular given numerical imprecision and the non-uniqueness of solutions.
#### Tolerances

LP solvers almost always use floating-point arithmetic, making their solutions subject to numerical imprecision. To account for this, and to improve performance by avoiding effort on solutions that are already good enough, solvers accept solutions—and claim to have solved a problem—when these solutions satisfy conditions up to certain tolerances.
Consider the linear programming problem

$$
\begin{align}
\min \quad & -2x_1 - x_2 \\
\text{s.t.} \quad & x_1 + x_2 \le 1 \\
& x_1 \ge 0, \; x_2 \ge 0
\end{align}
$$

and its corresponding dual problem

$$
\begin{align}
\max \quad & y \\
\text{s.t.} \quad & y \le -2 \\
& y \le -1 \\
& y \le 0
\end{align}
$$
This pair of problems has a unique optimal primal solution of $ x_1 = 1, x_2 = 0 $ and dual solution $ y = -2 $. Which solutions could be accepted as optimal by a solver? To answer this, we define the following quantities:

- The duality gap is the difference between the primal objective value and the dual objective value, in this case, $ (-2x_1 - x_2) - y $.
- The primal infeasibilities are the violations of the primal constraints, in this case, $ (\max\{ (x_1+x_2) - 1, 0 \}, \max\{-x_1, 0\}, \max\{-x_2, 0\}) $.
- The dual infeasibilities are the violations of the dual constraints, in this case, $ (\max\{ 2 + y, 0 \}, \max\{ 1 + y, 0 \}, \max\{ y, 0 \}) $.
A solver declares a solution as optimal if the duality gap, the primal infeasibilities, and the dual infeasibilities are smaller than a given tolerance.^{1}
Notably, the application of the tolerances varies for both natural and idiosyncratic reasons across solvers and algorithms. For example, the duality gap in the simplex algorithm is driven only by numerical imprecision, while the primal and dual infeasibilities are present even in exact arithmetic. Some methods directly enforce the bound constraints $ x_1 \ge 0, x_2 \ge 0, $ and $ y \le 0 $, while others treat violations of bound constraints differently from violations of linear constraints like $x_1 + x_2 \le 1$. For some solvers, tolerances are absolute; that is, there is a parameter $ \epsilon $, and solutions are considered optimal if the duality gap and all primal and dual infeasibilities are less than or equal to $ \epsilon $. For other solvers, tolerances are relative, meaning that they scale with the size of the coefficients in the problem.
For an example of the effect of tolerances, consider an absolute tolerance of $ \epsilon = \frac{1}{2} $ applied to the above primal-dual pair. The solution $ x_1 = 1.5, x_2 = 0, y = -3 $ has zero duality gap and infeasibilities all less than or equal to $ \epsilon $, hence a solver might declare this solution "optimal". Yet, its objective value (-3) differs by 1 from the true optimal objective value of -2. Typical default values of $ \epsilon $ are between $10^{-6}$ and $10^{-8}$, which makes such extreme examples rare but not impossible.
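The arithmetic behind this example can be verified directly. The snippet below recomputes the duality gap and infeasibility vectors for the candidate solution; every check passes at $ \epsilon = \frac{1}{2} $ even though the objective value is off by a full unit.

```python
# Candidate solution from the text, checked against an absolute
# tolerance eps = 0.5 for the primal-dual pair above.
eps = 0.5
x1, x2, y = 1.5, 0.0, -3.0

duality_gap = (-2 * x1 - x2) - y
primal_infeas = [max((x1 + x2) - 1, 0.0), max(-x1, 0.0), max(-x2, 0.0)]
dual_infeas = [max(2 + y, 0.0), max(1 + y, 0.0), max(y, 0.0)]

within_tolerance = (
    abs(duality_gap) <= eps
    and all(v <= eps for v in primal_infeas + dual_infeas)
)
print(within_tolerance)  # True: a solver may declare this "optimal",
# yet -2*x1 - x2 = -3, one unit below the true optimum of -2.
```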
#### Types of solutions

LP problems can have more than one optimal solution, even more so when accounting for tolerances. We briefly discuss the properties of the solutions returned by the three families of LP algorithms presented above.
Simplex algorithms always return vertices or corner points of the feasible region. These solutions are preferred in some situations because they tend to be sparser.
Barrier and first-order methods do not generally return vertices. (Theory provides additional characterizations that are beyond the scope of this guide.)

For historical reasons and because vertex solutions have appealing properties, solvers often, by default, apply a crossover procedure to move to an optimal vertex from a solution found by a barrier algorithm. Crossover is not currently offered for solutions found by first-order methods.
## Recommendations

We make the following recommendations for advanced use of LP solvers.

### Scaling of problem data

Solvers can experience slow convergence or failures on models because of numerical issues. Such issues can arise for many reasons; here we give one example.
It is common for very small or large numerical constants to appear in LP models. Extending the example from above, if $ x_1 $ and $ x_2 $ represent the fraction of customers assigned to "provider 1" or "provider 2", and if we want to maximize the benefit from serving these customers, we might write the following objective function:

$$ \min \; -c_1 x_1 - c_2 x_2 $$
where:

- $ c_1 $ is the benefit from assigning customers to provider 1
- $ c_2 $ is the benefit from assigning customers to provider 2

Duals satisfying the following constraints would be considered feasible with an absolute tolerance $ \epsilon $:

- $ y \le -c_1 + \epsilon $,
- $ y \le -c_2 + \epsilon $.
If the benefit units of $ c_1 $ and $ c_2 $ are small fractional values that happen to be on the same scale as $ \epsilon $, then the dual feasibility conditions become rather weak, hence a very suboptimal primal may be declared optimal.
If, on the other hand, the benefit units are "microdollars" (1 000 000 microdollars = 1 dollar), the resulting very large absolute values demand very high precision in the solution, possibly unreasonably high given the limits of floating-point numbers. Solvers may fail to converge if the problem is formulated in this way. As part of formulating a well-posed problem, advanced modelers should consider whether the problem data are scaled in a way that's consistent with their solver's tolerances.
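As an illustration of this point (with made-up numbers), converting microdollar benefit data to dollars before building the model keeps the objective coefficients near 1:

```python
# Hypothetical benefit data expressed in microdollars.
benefits_microdollars = {"provider1": 2_000_000, "provider2": 1_500_000}

# Rescale to dollars so the objective coefficients are O(1), which sits
# comfortably with absolute tolerances in the 1e-6 to 1e-8 range.
benefits_dollars = {
    name: value / 1_000_000 for name, value in benefits_microdollars.items()
}
print(benefits_dollars)  # {'provider1': 2.0, 'provider2': 1.5}
# Remember to convert the optimal objective value back when reporting.
```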
In addition to avoiding numerical failures, tolerances may also be tuned for better performance. Simplex and barrier methods are not too sensitive to tolerances but occasionally benefit from larger tolerances if progress is observed to stall at the end of the solve. First-order methods, on the other hand, are typically much more sensitive, and their users can obtain faster solutions by relaxing tolerances, as the context allows.
For Glop's tolerances, see `primal_feasibility_tolerance`, `dual_feasibility_tolerance`, and `solution_feasibility_tolerance` in `GlopParameters`. Note that `primal_feasibility_tolerance` and `dual_feasibility_tolerance` are used within the algorithm, while `solution_feasibility_tolerance` is checked post-solve to flag numerical issues. For PDLP, see `eps_optimal_absolute` and `eps_optimal_relative`.
For further reading on these types of issues, see Gurobi's Guidelines for Numerical Issues. While the guidelines are written for users of Gurobi, many of the principles apply generally.
### Choice of solvers and algorithms

We start off with an example of how large the impact of the choice of solvers and algorithms can be, and then present a guide for choosing LP solvers.

#### Variability in practice

We illustrate the variability in performance across LP algorithms and solvers by comparing solve times on a selection of instances that have been used by Hans Mittelmann for benchmarking LP solvers. The instances are explicitly chosen to show the extremes of relative performance and are not necessarily representative of typical behavior.

Glop's primal and dual simplex methods are compared with Gurobi's barrier method (with and without crossover; crossover finds a vertex solution) and PDLP, a first-order method, at both high and low precision. The table below reports solve times in seconds, with a limit of 20 minutes (1200 seconds).
| Instance | Glop Primal Simplex | Glop Dual Simplex | Gurobi Barrier with Crossover | Gurobi Barrier without Crossover | PDLP High Precision | PDLP Low Precision |
| -------- | ------------------- | ----------------- | ----------------------------- | -------------------------------- | ------------------- | ------------------ |
| ex10 | >1200 | >1200 | 79.7 | 63.5 | 8.2 | 2.7 |
| nug08-3rd | >1200 | 252.8 | 144.6 | 3.2 | 1.1 | 0.9 |
| savsched1 | >1200 | >1200 | 156.0 | 22.6 | 46.4 | 32.4 |
| wide15 | >1200 | 20.8 | >1200 | >1200 | 916.4 | 56.3 |
| buildingenergy | 178.4 | 267.5 | 12.8 | 12.3 | >1200 | 157.2 |
| s250r10 | 12.1 | 520.6 | 15.2 | 16.4 | >1200 | >1200 |
| Solver version | OR-Tools 9.3 | OR-Tools 9.3 | Gurobi 9.0 | Gurobi 9.0 | OR-Tools 9.3 | OR-Tools 9.3 |
| `solver_specific_parameters` | (defaults) | `use_dual_simplex: true` | `Method 2, Threads 1` | `Method 2, Crossover 0, Threads 1` | `termination_criteria { eps_optimal_absolute: 1e-8 eps_optimal_relative: 1e-8 }` | `termination_criteria { eps_optimal_absolute: 1e-4 eps_optimal_relative: 1e-4 }` |
From these results we conclude the following.

- The relative performance of algorithms and solvers can vary by orders of magnitude on a single instance.
- No single algorithm or solver is uniformly better than the others.
- Crossover (enabled by default) increases solve time, sometimes substantially.
- PDLP can solve to low precision sometimes 10 times faster than to high precision.
#### A brief guide for choosing LP solvers

Given that no single LP algorithm or solver is the best, we recommend the following steps for discovering what is best for your use case. Start with the first step and proceed to the next if performance is not sufficient.

1. Try Glop. Why: Glop is Google's in-house implementation of the primal and dual simplex methods. Glop is open source and trusted for Google's production workloads.
    - If the default configuration (primal simplex) doesn't perform well, try switching to dual simplex using `use_dual_simplex: true`.
2. If a license is available, try a commercial solver (CPLEX, Gurobi, or Xpress). Experiment with simplex and barrier methods. Why: These solvers are industry standard and highly optimized. Also, barrier methods complement the simplex algorithms offered by Glop.
    - If using barrier, disable "crossover" if you do not need a vertex solution.
3. Try PDLP. Tune the convergence tolerances to your application. Why: PDLP is designed for the largest problems, where simplex and barrier methods hit memory limits or are too slow. PDLP performs best when an approximate but quick solution is preferred to an exact but slow solution.
4. If you have made it this far, you are now an advanced LP user! Please see the OR-Tools support options for further help.
^{1} It is often more complex than this. Solvers typically check these conditions on a transformed/simplified version of the problem, called the presolved problem. In some cases, a solution to the presolved problem may be far away from a solution to the input problem. This situation can lead to unusual statuses, like CPLEX's `OptimalInfeas` or Glop's `IMPRECISE`. ↩