cobyqa.minimize

cobyqa.minimize(fun, x0, args=(), xl=None, xu=None, aub=None, bub=None, aeq=None, beq=None, cub=None, ceq=None, options=None)

Minimize a scalar function using the COBYQA method.

The COBYQA method is a derivative-free optimization method for general nonlinear optimization described in [3].

Parameters:
fun : {callable, None}

Objective function to be minimized.

fun(x, *args) -> float

where x is an array with shape (n,) and args is a tuple.

x0 : array_like, shape (n,)

Initial guess.

args : tuple, optional

Extra arguments passed to the objective and constraint functions.
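For instance, a parametrized objective (the shift a below is purely illustrative, not part of this API) can receive its parameter through args:

```python
def fun(x, a):
    # Shifted quadratic; `a` would be forwarded by the solver through `args`.
    return (x[0] - a) ** 2.0

# A call such as minimize(fun, [0.0], args=(2.0,)) would evaluate fun(x, 2.0).
print(fun([2.0], 2.0))  # objective value at the shifted minimizer
```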

xl : array_like, shape (n,), optional

Lower bounds on the variables xl <= x.

xu : array_like, shape (n,), optional

Upper bounds on the variables x <= xu.

aub : array_like, shape (m_linear_ub, n), optional

Left-hand side matrix of the linear inequality constraints aub @ x <= bub.

bub : array_like, shape (m_linear_ub,), optional

Right-hand side vector of the linear inequality constraints aub @ x <= bub.

aeq : array_like, shape (m_linear_eq, n), optional

Left-hand side matrix of the linear equality constraints aeq @ x == beq.

beq : array_like, shape (m_linear_eq,), optional

Right-hand side vector of the linear equality constraints aeq @ x == beq.

cub : callable, optional

Nonlinear inequality constraints function cub(x, *args) <= 0.

cub(x, *args) -> array_like, shape (m_nonlinear_ub,)

where x is an array with shape (n,) and args is a tuple.

ceq : callable, optional

Nonlinear equality constraints function ceq(x, *args) == 0.

ceq(x, *args) -> array_like, shape (m_nonlinear_eq,)

where x is an array with shape (n,) and args is a tuple.

options : dict, optional

Options passed to the solver. Accepted keys are:

verbose : bool, optional

Whether to print information about the optimization procedure.

max_eval : int, optional

Maximum number of function evaluations.

max_iter : int, optional

Maximum number of iterations.

target : float, optional

Target on the objective function value. The optimization procedure is terminated when the objective function value of a nearly feasible point is less than or equal to this target.

store_hist : bool, optional

Whether to store the history of the function evaluations.

hist_size : int, optional

Maximum number of function evaluations to store in the history.

radius_init : float, optional

Initial trust-region radius.

radius_final : float, optional

Final trust-region radius.

npt : int, optional

Number of interpolation points.

debug : bool, optional

Whether to perform additional checks. This option is intended solely for debugging purposes; its use is otherwise highly discouraged.
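As an illustration, an options dictionary built from the keys documented above could look as follows; the values are arbitrary choices for the sketch, not the solver's defaults:

```python
# Illustrative options dictionary; keys are those documented above,
# values are arbitrary and do not reflect the solver's defaults.
options = {
    "verbose": False,
    "max_eval": 500,
    "max_iter": 250,
    "radius_init": 1.0,
    "radius_final": 1e-6,
}
# It would then be passed as: minimize(fun, x0, options=options)
print(sorted(options))
```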

Returns:
scipy.optimize.OptimizeResult

Result of the optimization procedure, which has the following fields:

message : str

Description of the cause of the termination.

success : bool

Whether the optimization procedure terminated successfully.

status : int

Termination status of the optimization procedure.

x : ndarray, shape (n,)

Solution point.

fun : float

Objective function value at the solution point.

maxcv : float

Maximum constraint violation at the solution point.

nit : int

Number of iterations.

nfev : int

Number of function evaluations.

References

[1]

J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, New York, NY, USA, second edition, 2006.

[2]

M. J. D. Powell. A direct search optimization method that models the objective and constraint functions by linear interpolation. In S. Gomez and J. P. Hennart, editors, Advances in Optimization and Numerical Analysis, volume 275 of Mathematics and Its Applications, pages 51–67. Springer, Dordrecht, The Netherlands, 1994.

[3]

T. M. Ragonneau. Model-Based Derivative-Free Optimization Methods and Software. PhD thesis, The Hong Kong Polytechnic University, Hong Kong, China, 2022.

Examples

We first minimize the Rosenbrock function implemented in scipy.optimize.

>>> from scipy.optimize import rosen
>>> from cobyqa import minimize

To solve the problem using COBYQA, run:

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0)
>>> res.x
array([1., 1., 1., 1., 1.])
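As a quick sanity check, the Rosenbrock function (re-implemented here in pure Python for illustration; it matches the definition used by scipy.optimize.rosen) vanishes at the reported minimizer:

```python
def rosenbrock(x):
    # Pure-Python Rosenbrock function: sum of 100 * (x[i+1] - x[i]^2)^2 + (1 - x[i])^2.
    return sum(
        100.0 * (x[i + 1] - x[i] ** 2.0) ** 2.0 + (1.0 - x[i]) ** 2.0
        for i in range(len(x) - 1)
    )

print(rosenbrock([1.0, 1.0, 1.0, 1.0, 1.0]))  # 0.0 at the global minimizer
```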

To see how bound and linear constraints are handled using minimize, we solve Example 16.4 of [1], defined as

\[\begin{split}\begin{aligned} \min_{x \in \mathbb{R}^2} & \quad (x_1 - 1)^2 + (x_2 - 2.5)^2\\ \text{s.t.} & \quad -x_1 + 2x_2 \le 2,\\ & \quad x_1 + 2x_2 \le 6,\\ & \quad x_1 - 2x_2 \le 2,\\ & \quad x_1 \ge 0,\\ & \quad x_2 \ge 0. \end{aligned}\end{split}\]

Its objective function can be implemented as:

>>> def fun(x):
...     return (x[0] - 1.0) ** 2.0 + (x[1] - 2.5) ** 2.0

This problem can be solved using minimize as:

>>> x0 = [2.0, 0.0]
>>> xl = [0.0, 0.0]
>>> aub = [[-1.0, 2.0], [1.0, 2.0], [1.0, -2.0]]
>>> bub = [2.0, 6.0, 2.0]
>>> res = minimize(fun, x0, xl=xl, aub=aub, bub=bub)
>>> res.x
array([1.4, 1.7])
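The reported solution can be checked against the constraints directly; the following plain-Python feasibility check is independent of the solver:

```python
aub = [[-1.0, 2.0], [1.0, 2.0], [1.0, -2.0]]
bub = [2.0, 6.0, 2.0]
xl = [0.0, 0.0]
x = [1.4, 1.7]

# Residuals of aub @ x - bub; nonpositive values mean the constraints hold.
residuals = [
    sum(a * xi for a, xi in zip(row, x)) - b
    for row, b in zip(aub, bub)
]
assert all(r <= 1e-9 for r in residuals)       # linear inequalities
assert all(lo <= xi for lo, xi in zip(xl, x))  # lower bounds
```

Note that the first residual is zero: the constraint -x1 + 2*x2 <= 2 is active at the solution.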

Finally, to see how nonlinear constraints are handled, we solve Problem (F) of [2], defined as

\[\begin{split}\begin{aligned} \min_{x \in \mathbb{R}^2} & \quad -x_1 - x_2\\ \text{s.t.} & \quad x_1^2 - x_2 \le 0,\\ & \quad x_1^2 + x_2^2 \le 1. \end{aligned}\end{split}\]

Its objective and constraint functions can be implemented as:

>>> def fun(x):
...     return -x[0] - x[1]
>>>
>>> def cub(x):
...     return [x[0] ** 2.0 - x[1], x[0] ** 2.0 + x[1] ** 2.0 - 1.0]

This problem can be solved using minimize as:

>>> x0 = [1.0, 1.0]
>>> res = minimize(fun, x0, cub=cub)
>>> res.x
array([0.707, 0.707])
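Again as a sanity check, the point (sqrt(2)/2, sqrt(2)/2) satisfies both nonlinear inequality constraints, with the circle constraint active at the solution:

```python
import math

x = [math.sqrt(2.0) / 2.0, math.sqrt(2.0) / 2.0]

# Values of cub(x); nonpositive entries (up to rounding) mean the constraints hold.
cub_vals = [x[0] ** 2.0 - x[1], x[0] ** 2.0 + x[1] ** 2.0 - 1.0]
assert all(c <= 1e-9 for c in cub_vals)
assert abs(cub_vals[1]) < 1e-9  # the circle constraint x1^2 + x2^2 <= 1 is active
```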