cobyqa.minimize

cobyqa.minimize(fun, x0, args=(), xl=None, xu=None, aub=None, bub=None, aeq=None, beq=None, cub=None, ceq=None, options=None, **kwargs)

Minimize a real-valued function.

The minimization can be subject to bound, linear-inequality, linear-equality, nonlinear-inequality, and nonlinear-equality constraints. Although the solver may encounter infeasible points (including the initial guess x0), the bound constraints (if any) are always respected.
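As an illustrative sketch (this toy problem and its data are made up for exposition, not taken from the solver's documentation), a call mixing bound and linear inequality constraints may look like:

>>> import numpy as np
>>> from cobyqa import minimize
>>> res = minimize(lambda x: (x[0] - 1.0) ** 2.0 + (x[1] - 1.0) ** 2.0,
...                [0.0, 0.0], xl=[0.0, -np.inf], aub=[[1.0, 1.0]], bub=[1.0])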

Parameters:
fun : callable

Objective function to be minimized.

fun(x, *args) -> float

where x is an array with shape (n,) and args is a tuple.

x0 : array_like, shape (n,)

Initial guess.

args : tuple, optional

Parameters of the objective and constraint functions.
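For example, a scaling parameter shared with the objective function can be forwarded as follows (a sketch; the function scaled_sphere is illustrative, not part of the API):

>>> from cobyqa import minimize
>>> def scaled_sphere(x, scale):
...     return scale * (x[0] ** 2.0 + x[1] ** 2.0)
>>> res = minimize(scaled_sphere, [1.0, 1.0], args=(2.0,))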

xl : array_like, shape (n,), optional

Lower-bound constraints on x. Use -numpy.inf to disable the bound constraints on some variables.

xu : array_like, shape (n,), optional

Upper-bound constraints on x. Use numpy.inf to disable the bound constraints on some variables.
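For instance, to bound the first variable from below only and the second from above only:

>>> import numpy as np
>>> xl = [0.0, -np.inf]  # x[0] >= 0, no lower bound on x[1]
>>> xu = [np.inf, 2.0]   # no upper bound on x[0], x[1] <= 2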

aub : array_like, shape (mlub, n), optional

Jacobian matrix of the linear inequality constraints. Each row of aub stores the gradient of a linear inequality constraint.

bub : array_like, shape (mlub,), optional

Right-hand side of the linear inequality constraints aub @ x <= bub.

aeq : array_like, shape (mleq, n), optional

Jacobian matrix of the linear equality constraints. Each row of aeq stores the gradient of a linear equality constraint.

beq : array_like, shape (mleq,), optional

Right-hand side of the linear equality constraints aeq @ x = beq.
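For example, the single linear equality constraint \(x_1 + x_2 = 1\) on two variables reads:

>>> aeq = [[1.0, 1.0]]  # one row: the gradient of x1 + x2
>>> beq = [1.0]         # so that aeq @ x = beq encodes x1 + x2 = 1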

cub : callable, optional

Nonlinear inequality constraint function.

cub(x, *args) -> array_like, shape (mnlub,)

where x is an array with shape (n,) and args is a tuple.

ceq : callable, optional

Nonlinear equality constraint function.

ceq(x, *args) -> array_like, shape (mnleq,)

where x is an array with shape (n,) and args is a tuple.
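The components of cub are imposed as cub(x) <= 0 and those of ceq as ceq(x) = 0, as the Examples section below illustrates. For instance, the constraints \(x_1^2 + x_2^2 \le 1\) and \(x_1 x_2 = 1\) can be written as:

>>> def cub(x):
...     return [x[0] ** 2.0 + x[1] ** 2.0 - 1.0]  # enforced as cub(x) <= 0
>>> def ceq(x):
...     return [x[0] * x[1] - 1.0]  # enforced as ceq(x) = 0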

options : dict, optional

Options to forward to the solver; a usage sketch follows the list below. Accepted options are:

rhobeg : float, optional

Initial trust-region radius (default is 1.0).

rhoend : float, optional

Final trust-region radius (default is 1e-6).

npt : int, optional

Number of interpolation points (default is 2 * n + 1).

maxfev : int, optional

Maximum number of function evaluations (default is 500 * n).

maxiter : int, optional

Maximum number of iterations (default is 1000 * n).

target : float, optional

Target value of the objective function (default is -numpy.inf). If the solver encounters a (nearly) feasible point at which the objective function value is below the target, the computations are stopped.

ftol_abs : float, optional

Absolute tolerance on the objective function.

ftol_rel : float, optional

Relative tolerance on the objective function.

xtol_abs : float, optional

Absolute tolerance on the decision variables.

xtol_rel : float, optional

Relative tolerance on the decision variables.

disp : bool, optional

Whether to print information about the execution of the optimizer (default is False).

debug : bool, optional

Whether to run debugging checks during the execution, which is not recommended in production (default is False).
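A sketch of a call forwarding some of these options (the values below are illustrative, not recommendations):

>>> from scipy.optimize import rosen
>>> from cobyqa import minimize
>>> options = {'rhoend': 1e-8, 'maxfev': 2000}
>>> res = minimize(rosen, [1.3, 0.7, 0.8, 1.9, 1.2], options=options)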

Returns:
OptimizeResult

Result of the optimizer. Important attributes are x, the solution point; success, a flag indicating whether the optimization terminated successfully; and message, a description of the termination status. See OptimizeResult for details.
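Assuming a run as in the Examples section below, the result can be inspected as follows (a sketch; the exact message depends on the termination status):

>>> from scipy.optimize import rosen
>>> from cobyqa import minimize
>>> res = minimize(rosen, [1.3, 0.7, 0.8, 1.9, 1.2])
>>> solution, converged, reason = res.x, res.success, res.message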

Other Parameters:
store_hist : bool, optional

Whether to store the histories of the points at which the objective and constraint functions have been evaluated (default is False).

eta1 : float, optional

If the trust-region ratio is smaller than or equal to eta1, then the trust-region radius is decreased (default is 0.1).

eta2 : float, optional

If the trust-region ratio is larger than eta2, then the trust-region radius is increased (default is 0.7).

eta3 : float, optional

The lower bound on the trust-region radius is considered small if it is smaller than or equal to eta3 * options["rhoend"] (default is 16).

eta4 : float, optional

The lower bound on the trust-region radius is considered large if it is larger than eta4 * options["rhoend"] (default is 250).

eta5 : float, optional

If the trust-region ratio is larger than eta5, then it is considered too large for restarting the trust-region models (default is 0.01).

upsilon1 : float, optional

If the penalty parameter is smaller than or equal to upsilon1 times the smallest theoretical threshold, it is increased (default is 1.5).

upsilon2 : float, optional

Factor by which the penalty parameter is increased (default is 2).

theta1 : float, optional

Factor by which the trust-region radius is decreased (default is 0.5).

theta2 : float, optional

If the trust-region radius is smaller than or equal to theta2 times the lower bound on the trust-region radius, then it is set to the lower bound on the trust-region radius (default is 1.4).

theta3 : float, optional

Factor by which the trust-region radius is increased (default is \(\sqrt{2}\)).

theta4 : float, optional

An empirical factor to increase the trust-region radius (default is 2).

theta5 : float, optional

Factor by which the lower bound on the trust-region radius is decreased (default is 0.1).

zeta : float, optional

Factor by which the trust-region radius is decreased in the normal subproblem of the Byrd-Omojokun approach (default is 0.8).
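These fine-tuning parameters are forwarded through **kwargs. For example, making the trust-region radius updates more conservative might look like (the values below are illustrative):

>>> from scipy.optimize import rosen
>>> from cobyqa import minimize
>>> res = minimize(rosen, [1.3, 0.7, 0.8, 1.9, 1.2], eta1=0.05, eta2=0.9)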

References

[1]

J. Nocedal and S. J. Wright. Numerical Optimization. 2nd ed. Springer Series in Operations Research and Financial Engineering. New York, NY, USA: Springer, 2006.

[2]

M. J. D. Powell. “A direct search optimization method that models the objective and constraint functions by linear interpolation.” In: Advances in Optimization and Numerical Analysis. Edited by S. Gomez and J. P. Hennart. Dordrecht, NL: Springer, 1994, pages 51–67.

Examples

We consider the problem of minimizing the Rosenbrock function as implemented in the scipy.optimize module.

>>> from scipy.optimize import rosen
>>> from cobyqa import minimize

In the unconstrained case, minimize may be applied as follows:

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0)
>>> res.x
array([1., 1., 1., 1., 1.])
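The same run can be capped at a given evaluation budget through the options described above (a sketch; 200 evaluations may or may not suffice for full accuracy):

>>> res = minimize(rosen, x0, options={'maxfev': 200})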

We now consider Example 16.4 of [1], defined as

\[\begin{aligned}
\min_{x \in \mathbb{R}^2} & \quad (x_1 - 1)^2 + (x_2 - 2.5)^2\\
\text{s.t.} & \quad -x_1 + 2x_2 \le 2,\\
& \quad x_1 + 2x_2 \le 6,\\
& \quad x_1 - 2x_2 \le 2,\\
& \quad x_1 \ge 0,\\
& \quad x_2 \ge 0.
\end{aligned}\]

Its objective function can be implemented as:

>>> def quadratic(x):
...     return (x[0] - 1.0) ** 2.0 + (x[1] - 2.5) ** 2.0

This problem can be solved using minimize as:

>>> x0 = [2.0, 0.0]
>>> xl = [0.0, 0.0]
>>> aub = [[-1.0, 2.0], [1.0, 2.0], [1.0, -2.0]]
>>> bub = [2.0, 6.0, 2.0]
>>> res = minimize(quadratic, x0, xl=xl, aub=aub, bub=bub)
>>> res.x
array([1.4, 1.7])

Moreover, although clearly inefficient in this case, the linear inequality constraints can also be provided as nonlinear constraints:

>>> def cub(x):
...     c1 = -x[0] + 2.0 * x[1] - 2.0
...     c2 = x[0] + 2.0 * x[1] - 6.0
...     c3 = x[0] - 2.0 * x[1] - 2.0
...     return [c1, c2, c3]

This problem can be solved using minimize as:

>>> res = minimize(quadratic, x0, xl=xl, cub=cub)
>>> res.x
array([1.4, 1.7])

To conclude, let us consider Problem G of [2], defined as

\[\begin{aligned}
\min_{x \in \mathbb{R}^3} & \quad f(x) = x_3\\
\text{s.t.} & \quad -5x_1 + x_2 - x_3 \le 0,\\
& \quad 5x_1 + x_2 - x_3 \le 0,\\
& \quad x_1^2 + x_2^2 + 4x_2 - x_3 \le 0.
\end{aligned}\]

Its only nonlinear constraint can be implemented in Python as:

>>> def cub(x):
...     return x[0] ** 2.0 + x[1] ** 2.0 + 4.0 * x[1] - x[2]

This problem can be solved using minimize as:

>>> x0 = [1.0, 1.0, 1.0]
>>> aub = [[-5.0, 1.0, -1.0], [5.0, 1.0, -1.0]]
>>> bub = [0.0, 0.0]
>>> res = minimize(lambda x: x[2], x0, aub=aub, bub=bub, cub=cub)
>>> res.x
array([ 0., -3., -3.])
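As a sanity check, substituting the reported solution into the constraints gives \(-5 \times 0 + (-3) - (-3) = 0\), \(5 \times 0 + (-3) - (-3) = 0\), and \(0^2 + (-3)^2 + 4 \times (-3) - (-3) = 0\), so all three constraints are active at the returned point.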