cobyqa.minimize

cobyqa.minimize(fun, x0, args=(), bounds=None, constraints=(), callback=None, options=None, **kwargs)

Minimize a scalar function using the COBYQA method.

The Constrained Optimization BY Quadratic Approximations (COBYQA) method is a derivative-free optimization method designed to solve general nonlinear optimization problems. A complete description of COBYQA is given in [3].

Parameters:
fun : {callable, None}

Objective function to be minimized.

fun(x, *args) -> float

where x is an array with shape (n,) and args is a tuple. If fun is None, the objective function is assumed to be the zero function, resulting in a feasibility problem.
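
For instance, a minimal sketch of a feasibility problem relying only on the documented behavior of fun=None (the bound values are illustrative):

>>> from cobyqa import minimize
>>> res = minimize(None, [2.0], bounds=[[0.0, 1.0]])  # find any point with 0 <= x <= 1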

x0 : array_like, shape (n,)

Initial guess.

args : tuple, optional

Extra arguments passed to the objective function.

bounds : {scipy.optimize.Bounds, array_like, shape (n, 2)}, optional

Bound constraints of the problem. It can be one of the cases below.

  1. An instance of scipy.optimize.Bounds. For the time being, the argument keep_feasible is disregarded.

  2. An array with shape (n, 2). The bound constraints for x[i] are bounds[i][0] <= x[i] <= bounds[i][1]. Set bounds[i][0] to \(-\infty\) if there is no lower bound, and set bounds[i][1] to \(\infty\) if there is no upper bound.

The COBYQA method always respects the bound constraints.
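
For illustration, both accepted forms below encode the same bounds 0 <= x[i] <= 1 for two variables (the values are arbitrary):

>>> import numpy as np
>>> from scipy.optimize import Bounds
>>> bounds = Bounds([0.0, 0.0], [1.0, 1.0])      # case 1: scipy.optimize.Bounds
>>> bounds = np.array([[0.0, 1.0], [0.0, 1.0]])  # case 2: array with shape (n, 2)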

constraints : {Constraint, dict, list}, optional

General constraints of the problem. It can be one of the cases below.

  1. An instance of scipy.optimize.LinearConstraint. The argument keep_feasible is disregarded.

  2. An instance of scipy.optimize.NonlinearConstraint. The arguments jac, hess, keep_feasible, finite_diff_rel_step, and finite_diff_jac_sparsity are disregarded.

  3. A dictionary with fields:

    type : {‘eq’, ‘ineq’}

    Whether the constraint is an equality fun(x, *args) == 0 or an inequality fun(x, *args) >= 0.

    fun : callable

    Constraint function.

    args : tuple, optional

    Extra arguments passed to the constraint function.

  4. A list, each of whose elements is one of the cases described above.
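
As a sketch, the dictionary form below encodes the arbitrary inequality constraint x[0] + x[1] - 1 >= 0 (case 3), and wrapping it in a list gives case 4:

>>> con = {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}  # fun(x) >= 0
>>> constraints = [con]  # a list mixing any of the cases above is also accepted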

callback : callable, optional

A callback executed at each objective function evaluation. The method terminates if a StopIteration exception is raised by the callback function. Its signature can be one of the following:

callback(intermediate_result)

where intermediate_result is a keyword parameter that contains an instance of scipy.optimize.OptimizeResult, with attributes x and fun, which are the point at which the objective function is evaluated and the value of the objective function, respectively. The parameter must be named intermediate_result for the callback to be passed an instance of scipy.optimize.OptimizeResult.

Alternatively, the callback function can have the signature:

callback(xk)

where xk is the point at which the objective function is evaluated. Introspection is used to determine which of the signatures to invoke.
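
For example, a sketch of a callback using the intermediate_result signature that stops the procedure once the objective value falls below an arbitrary threshold:

>>> def callback(intermediate_result):
...     # intermediate_result is a scipy.optimize.OptimizeResult with x and fun.
...     if intermediate_result.fun < 1e-8:  # illustrative threshold
...         raise StopIteration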

options : dict, optional

Options passed to the solver. Accepted keys are:

disp : bool, optional

Whether to print information about the optimization procedure.

maxfev : int, optional

Maximum number of function evaluations.

maxiter : int, optional

Maximum number of iterations.

target : float, optional

Target on the objective function value. The optimization procedure is terminated when the objective function value of a feasible point is less than or equal to this target.

feasibility_tol : float, optional

Tolerance on the constraint violation. If the maximum constraint violation at a point is less than or equal to this tolerance, the point is considered feasible.

radius_init : float, optional

Initial trust-region radius. Typically, this value should be on the order of one tenth of the greatest expected change to x0.

radius_final : float, optional

Final trust-region radius. It should indicate the accuracy required in the final values of the variables.

nb_points : int, optional

Number of interpolation points used to build the quadratic models of the objective and constraint functions.

scale : bool, optional

Whether to scale the variables according to the bounds.

filter_size : int, optional

Maximum number of points in the filter. The filter is used to select the best point returned by the optimization procedure.

store_history : bool, optional

Whether to store the history of the function evaluations.

history_size : int, optional

Maximum number of function evaluations to store in the history.

debug : bool, optional

Whether to perform additional checks during the optimization procedure. This option is intended only for debugging purposes, and its use is highly discouraged for general users.
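
For instance, options are supplied as a plain dictionary; the values below are illustrative only, and fun and x0 are assumed to be defined as elsewhere on this page:

>>> options = {"maxfev": 500, "radius_init": 0.1, "disp": False}
>>> res = minimize(fun, x0, options=options)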

Other constants (from the keyword arguments) are described below. They are not intended to be changed by general users; they should only be changed by users with a deep understanding of the algorithm who want to experiment with different settings.

Returns:
scipy.optimize.OptimizeResult

Result of the optimization procedure, with the following fields:

message : str

Description of the cause of the termination.

success : bool

Whether the optimization procedure terminated successfully.

status : int

Termination status of the optimization procedure.

x : numpy.ndarray, shape (n,)

Solution point.

fun : float

Objective function value at the solution point.

maxcv : float

Maximum constraint violation at the solution point.

nfev : int

Number of function evaluations.

nit : int

Number of iterations.

If store_history is True, the result also has the following fields:

fun_history : numpy.ndarray, shape (nfev,)

History of the objective function values.

maxcv_history : numpy.ndarray, shape (nfev,)

History of the maximum constraint violations.
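
A sketch of retrieving the histories, assuming only the documented store_history option:

>>> from cobyqa import minimize
>>> from scipy.optimize import rosen
>>> res = minimize(rosen, [1.3, 0.7], options={"store_history": True})
>>> fun_history = res.fun_history      # objective function value at each evaluation
>>> maxcv_history = res.maxcv_history  # maximum constraint violation at each evaluation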

A description of the termination statuses is given below.

Exit status    Description
0              The lower bound for the trust-region radius has been reached.
1              The target objective function value has been reached.
2              All variables are fixed by the bound constraints.
3              The callback requested to stop the optimization procedure.
4              The feasibility problem received has been solved successfully.
5              The maximum number of function evaluations has been exceeded.
6              The maximum number of iterations has been exceeded.
-1             The bound constraints are infeasible.
-2             A linear algebra error occurred.
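
These fields might be inspected after a run as in the following sketch:

>>> from cobyqa import minimize
>>> from scipy.optimize import rosen
>>> res = minimize(rosen, [1.3, 0.7])
>>> if not res.success:
...     print(res.status, res.message)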

Other Parameters:
decrease_radius_factor : float, optional

Factor by which the trust-region radius is reduced when the reduction ratio is low or negative.

increase_radius_factor : float, optional

Factor by which the trust-region radius is increased when the reduction ratio is large.

increase_radius_threshold : float, optional

Threshold that controls the increase of the trust-region radius when the reduction ratio is large.

decrease_radius_threshold : float, optional

Threshold used to determine whether the trust-region radius should be reduced to the resolution.

decrease_resolution_factor : float, optional

Factor by which the resolution is reduced when the current value is far from its final value.

large_resolution_threshold : float, optional

Threshold used to determine whether the resolution is far from its final value.

moderate_resolution_threshold : float, optional

Threshold used to determine whether the resolution is close to its final value.

low_ratio : float, optional

Threshold used to determine whether the reduction ratio is low.

high_ratio : float, optional

Threshold used to determine whether the reduction ratio is high.

very_low_ratio : float, optional

Threshold used to determine whether the reduction ratio is very low. This is used to determine whether the models should be reset.

penalty_increase_threshold : float, optional

Threshold used to determine whether the penalty parameter should be increased.

penalty_increase_factor : float, optional

Factor by which the penalty parameter is increased.

short_step_threshold : float, optional

Factor used to determine whether the trial step is too short.

low_radius_factor : float, optional

Factor used to determine which interpolation point should be removed from the interpolation set at each iteration.

byrd_omojokun_factor : float, optional

Factor by which the trust-region radius is reduced for the computations of the normal step in the Byrd-Omojokun composite-step approach.

threshold_ratio_constraints : float, optional

Threshold used to determine which constraints should be taken into account when decreasing the penalty parameter.

large_shift_factor : float, optional

Factor used to determine whether the point around which the quadratic models are built should be updated.

large_gradient_factor : float, optional

Factor used to determine whether the models should be reset.

resolution_factor : float, optional

Factor by which the resolution is decreased.

improve_tcg : bool, optional

Whether to improve the steps computed by the truncated conjugate gradient method when the trust-region boundary is reached.
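
These constants are forwarded through the keyword arguments; a purely illustrative sketch (again assuming fun and x0 are defined, with no recommendation on the value):

>>> res = minimize(fun, x0, improve_tcg=False)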

References

[1] J. Nocedal and S. J. Wright. Numerical Optimization. Springer Ser. Oper. Res. Financ. Eng. Springer, New York, NY, USA, second edition, 2006. doi:10.1007/978-0-387-40065-5.

[2] M. J. D. Powell. A direct search optimization method that models the objective and constraint functions by linear interpolation. In S. Gomez and J.-P. Hennart, editors, Advances in Optimization and Numerical Analysis, volume 275 of Math. Appl., pages 51–67. Springer, Dordrecht, Netherlands, 1994. doi:10.1007/978-94-015-8330-5_4.

[3] T. M. Ragonneau. Model-Based Derivative-Free Optimization Methods and Software. PhD thesis, Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong, China, 2022. URL: https://theses.lib.polyu.edu.hk/handle/200/12294.

Examples

To demonstrate how to use minimize, we first minimize the Rosenbrock function implemented in scipy.optimize in an unconstrained setting.

>>> from cobyqa import minimize
>>> from scipy.optimize import rosen

To solve the problem using COBYQA, run:

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0)
>>> res.x
array([1., 1., 1., 1., 1.])

To see how bound and linear constraints are handled using minimize, we solve Example 16.4 of [1], defined as

\[\begin{aligned}
\min_{x \in \mathbb{R}^2} & \quad (x_1 - 1)^2 + (x_2 - 2.5)^2\\
\text{s.t.} & \quad -x_1 + 2x_2 \le 2,\\
& \quad x_1 + 2x_2 \le 6,\\
& \quad x_1 - 2x_2 \le 2,\\
& \quad x_1 \ge 0,\\
& \quad x_2 \ge 0.
\end{aligned}\]

>>> import numpy as np
>>> from scipy.optimize import Bounds, LinearConstraint

Its objective function can be implemented as:

>>> def fun(x):
...     return (x[0] - 1.0) ** 2.0 + (x[1] - 2.5) ** 2.0

This problem can be solved using minimize as:

>>> x0 = [2.0, 0.0]
>>> bounds = Bounds([0.0, 0.0], np.inf)
>>> constraints = LinearConstraint([
...     [-1.0, 2.0],
...     [1.0, 2.0],
...     [1.0, -2.0],
... ], -np.inf, [2.0, 6.0, 2.0])
>>> res = minimize(fun, x0, bounds=bounds, constraints=constraints)
>>> res.x
array([1.4, 1.7])

Finally, to see how nonlinear constraints are handled, we solve Problem (F) of [2], defined as

\[\begin{aligned}
\min_{x \in \mathbb{R}^2} & \quad -x_1 - x_2\\
\text{s.t.} & \quad x_1^2 - x_2 \le 0,\\
& \quad x_1^2 + x_2^2 \le 1.
\end{aligned}\]

>>> from scipy.optimize import NonlinearConstraint

Its objective and constraint functions can be implemented as:

>>> def fun(x):
...     return -x[0] - x[1]
>>>
>>> def cub(x):
...     return [x[0] ** 2.0 - x[1], x[0] ** 2.0 + x[1] ** 2.0]

This problem can be solved using minimize as:

>>> x0 = [1.0, 1.0]
>>> constraints = NonlinearConstraint(cub, -np.inf, [0.0, 1.0])
>>> res = minimize(fun, x0, constraints=constraints)
>>> res.x
array([0.707, 0.707])