Pyomo offers interfaces to multiple solvers, both commercial and open
source. To support better capabilities for solver interfaces, the Pyomo
team is actively redesigning the existing interfaces to make them more
maintainable and intuitive to use. A preview of the redesigned
interfaces can be found in pyomo.contrib.solver.
New Interface Usage
The new interfaces are not completely backwards compatible with the
existing Pyomo solver interfaces. However, to aid in testing and
evaluation, we are distributing versions of the new solver interfaces
that are compatible with the existing (“legacy”) solver interface.
These “legacy” interfaces are registered with the current
SolverFactory using slightly different names (to avoid conflicts
with existing interfaces).
Table 7 Available Redesigned Solvers and Names Registered in the SolverFactories

| Solver | Name registered in the pyomo.contrib.solver.common.factory.SolverFactory | Name registered in the pyomo.opt.base.solvers.LegacySolverFactory |
|---|---|---|
| Ipopt | ipopt | ipopt_v2 |
| Gurobi (persistent) | gurobi_persistent | gurobi_persistent_v2 |
| Gurobi (direct) | gurobi_direct | gurobi_direct_v2 |
| HiGHS | highs | highs |
| KNITRO | knitro_direct | knitro_direct |
| GAMS | gams | gams_v2 |
Using the new interfaces through the legacy interface
Here we use the new interface as exposed through the existing (legacy)
solver factory and solver interface wrapper. This provides an API that
is compatible with the existing (legacy) Pyomo solver interface and can
be used with other Pyomo tools / capabilities.
import pyomo.environ as pyo
model = pyo.ConcreteModel()
model.x = pyo.Var(initialize=1.5)
model.y = pyo.Var(initialize=1.5)
def rosenbrock(model):
    return (1.0 - model.x) ** 2 + 100.0 * (model.y - model.x**2) ** 2
model.obj = pyo.Objective(rule=rosenbrock, sense=pyo.minimize)
status = pyo.SolverFactory('ipopt_v2').solve(model)
pyo.assert_optimal_termination(status)
model.pprint()
In keeping with our commitment to backwards compatibility, both the legacy and
future methods of specifying solver options are supported:
import pyomo.environ as pyo
model = pyo.ConcreteModel()
model.x = pyo.Var(initialize=1.5)
model.y = pyo.Var(initialize=1.5)
def rosenbrock(model):
    return (1.0 - model.x) ** 2 + 100.0 * (model.y - model.x**2) ** 2
model.obj = pyo.Objective(rule=rosenbrock, sense=pyo.minimize)
# Backwards compatible
status = pyo.SolverFactory('ipopt_v2').solve(model, options={'max_iter': 6})
# Forwards compatible
status = pyo.SolverFactory('ipopt_v2').solve(model, solver_options={'max_iter': 6})
model.pprint()
Using the new interfaces directly
Here we use the new interface by importing it directly:
# Direct import
import pyomo.environ as pyo
from pyomo.contrib.solver.solvers.ipopt import Ipopt
model = pyo.ConcreteModel()
model.x = pyo.Var(initialize=1.5)
model.y = pyo.Var(initialize=1.5)
def rosenbrock(model):
    return (1.0 - model.x) ** 2 + 100.0 * (model.y - model.x**2) ** 2
model.obj = pyo.Objective(rule=rosenbrock, sense=pyo.minimize)
opt = Ipopt()
status = opt.solve(model)
pyo.assert_optimal_termination(status)
# Displays important results information; only available through the new interfaces
status.display()
model.pprint()
Using the new interfaces through the “new” SolverFactory
Here we use the new interface by retrieving it from the new SolverFactory:
# Import through new SolverFactory
import pyomo.environ as pyo
from pyomo.contrib.solver.common.factory import SolverFactory
model = pyo.ConcreteModel()
model.x = pyo.Var(initialize=1.5)
model.y = pyo.Var(initialize=1.5)
def rosenbrock(model):
    return (1.0 - model.x) ** 2 + 100.0 * (model.y - model.x**2) ** 2
model.obj = pyo.Objective(rule=rosenbrock, sense=pyo.minimize)
opt = SolverFactory('ipopt')
status = opt.solve(model)
pyo.assert_optimal_termination(status)
# Displays important results information; only available through the new interfaces
status.display()
model.pprint()
Switching all of Pyomo to use the new interfaces
We also provide a mechanism to get a “preview” of the future where we
replace the existing (legacy) SolverFactory and utilities with the new
(development) version (see Accessing preview features):
# Change default SolverFactory version
import pyomo.environ as pyo
from pyomo.__future__ import solver_factory_v3
model = pyo.ConcreteModel()
model.x = pyo.Var(initialize=1.5)
model.y = pyo.Var(initialize=1.5)
def rosenbrock(model):
    return (1.0 - model.x) ** 2 + 100.0 * (model.y - model.x**2) ** 2
model.obj = pyo.Objective(rule=rosenbrock, sense=pyo.minimize)
status = pyo.SolverFactory('ipopt').solve(model)
pyo.assert_optimal_termination(status)
# Displays important results information; only available through the new interfaces
status.display()
model.pprint()
Linear Presolve and Scaling
The new interface allows access to new capabilities in the various
problem writers, including the linear presolve and scaling options
recently incorporated into the redesigned NL writer. For example, you
can control the NL writer in the new ipopt interface through the
solver’s writer_config configuration option (see the
Ipopt interface documentation).
from pyomo.contrib.solver.solvers.ipopt import Ipopt
opt = Ipopt()
opt.config.writer_config.display()
show_section_timing: false
skip_trivial_constraints: true
file_determinism: FileDeterminism.ORDERED
symbolic_solver_labels: false
scale_model: true
export_nonlinear_variables: None
row_order: None
column_order: None
export_defined_variables: true
linear_presolve: true
Note that, by default, both linear_presolve and scale_model are enabled.
Users can set these options to their preferred values by assigning to
them directly:
>>> opt.config.writer_config.linear_presolve = False
Dual Sign Convention
For all future solver interfaces, Pyomo adopts the following sign convention. Given the problem
\[\begin{split}\begin{aligned}
\min\quad & f(x) \\
\text{s.t.}\quad & c_i(x) = 0 \quad \forall i \in \mathcal{E} \\
& g_i(x) \le 0 \quad \forall i \in \mathcal{U} \\
& h_i(x) \ge 0 \quad \forall i \in \mathcal{L}
\end{aligned}\end{split}\]
We define the Lagrangian as
\[\begin{aligned}
L(x, \lambda, \nu, \delta)
&= f(x)
- \sum_{i \in \mathcal{E}} \lambda_i\,c_i(x)
- \sum_{i \in \mathcal{U}} \nu_i\,g_i(x)
- \sum_{i \in \mathcal{L}} \delta_i\,h_i(x)
\end{aligned}\]
Then, the KKT conditions are [NW99]
\[\begin{split}\begin{aligned}
\nabla_x L(x, \lambda, \nu, \delta) &= 0 \\
c(x) &= 0 \\
g(x) &\le 0 \\
h(x) &\ge 0 \\
\nu &\le 0 \\
\delta &\ge 0 \\
\nu_i\,g_i(x) &= 0 \\
\delta_i\,h_i(x) &= 0
\end{aligned}\end{split}\]
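As a small worked illustration of this convention, consider minimizing \(f(x) = x\) subject to the single constraint \(h(x) = x - 1 \ge 0\) (so \(\mathcal{E} = \mathcal{U} = \emptyset\)):
\[\begin{split}\begin{aligned}
L(x, \delta) &= x - \delta\,(x - 1) \\
\nabla_x L = 1 - \delta &= 0 \quad\Rightarrow\quad \delta = 1
\end{aligned}\end{split}\]
At the solution \(x^* = 1\) the constraint is active, \(\delta = 1 \ge 0\) as required, and complementarity \(\delta\,h(x^*) = 0\) holds.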
Note that this sign convention is based on the (lower, body, upper)
representation of constraints rather than the expression provided by a
user. Users can specify constraints with variables on both the left- and
right-hand sides of equalities and inequalities. However, the
(lower, body, upper) representation ensures that all variables
appear in the body, matching the form of the problem above.
For maximization problems of the form
\[\begin{split}\begin{aligned}
\max\quad & f(x) \\
\text{s.t.}\quad & c_i(x) = 0 \quad \forall i \in \mathcal{E} \\
& g_i(x) \le 0 \quad \forall i \in \mathcal{U} \\
& h_i(x) \ge 0 \quad \forall i \in \mathcal{L}
\end{aligned}\end{split}\]
we define the Lagrangian to be the same as above:
\[\begin{aligned}
L(x, \lambda, \nu, \delta)
&= f(x)
- \sum_{i \in \mathcal{E}} \lambda_i\,c_i(x)
- \sum_{i \in \mathcal{U}} \nu_i\,g_i(x)
- \sum_{i \in \mathcal{L}} \delta_i\,h_i(x)
\end{aligned}\]
As a result, the signs of the duals change. The KKT conditions are
\[\begin{split}\begin{aligned}
\nabla_x L(x, \lambda, \nu, \delta) &= 0 \\
c(x) &= 0 \\
g(x) &\le 0 \\
h(x) &\ge 0 \\
\nu &\ge 0 \\
\delta &\le 0 \\
\nu_i\,g_i(x) &= 0 \\
\delta_i\,h_i(x) &= 0
\end{aligned}\end{split}\]
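For instance, maximizing \(f(x) = x\) subject to \(g(x) = x - 1 \le 0\) gives
\[\begin{split}\begin{aligned}
L(x, \nu) &= x - \nu\,(x - 1) \\
\nabla_x L = 1 - \nu &= 0 \quad\Rightarrow\quad \nu = 1
\end{aligned}\end{split}\]
so the dual of the active upper-bound constraint is \(\nu = 1 \ge 0\), the opposite sign of the corresponding minimization case.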
Pyomo also supports “range constraints” which are inequalities with both upper
and lower bounds, where the bounds are not equal. For example,
\[-1 \leq x + y \leq 1\]
These are handled very similarly to variable bounds with respect to the
dual sign convention. At most one “side” of the inequality can be active
at a time. If neither side is active, the dual is zero. If the dual is
nonzero, it corresponds to the active side of the constraint; the dual
for the other side is implicitly zero. When accessing duals, the keys
are the constraints, so there is only one key for a range constraint
even though it effectively encodes two constraints, and the (zero) dual
for the inactive side is not reported explicitly. Again, the sign
convention is based on the (lower, body, upper) representation of the
constraint: the left side of this inequality belongs to
\(\mathcal{L}\) and the right side belongs to \(\mathcal{U}\).
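For example, when minimizing \(x + y\) subject to \(-1 \le x + y \le 1\), the lower side is active at any optimum, so the binding constraint is \(h(x, y) = x + y + 1 \ge 0\) with \(h \in \mathcal{L}\):
\[\begin{split}\begin{aligned}
L(x, y, \delta) &= x + y - \delta\,(x + y + 1) \\
\nabla_x L = \nabla_y L = 1 - \delta &= 0 \quad\Rightarrow\quad \delta = 1
\end{aligned}\end{split}\]
The dual reported for this range constraint is therefore \(\delta = 1\), matching the minimization convention for \(\mathcal{L}\), while the dual of the inactive upper side is implicitly zero.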