Getting started#

Most solvers in this library solve minimization problems of the following form:

\[\begin{split}\begin{aligned} & \underset{x}{\text{minimize}} & & f(x) &&&& f : \mathbb{R}^n \rightarrow \mathbb{R} \\ & \text{subject to} & & \underline{x} \le \phantom{g(}x\phantom{)} \le \overline{x} \\ &&& \underline{z} \le g(x) \le \overline{z} &&&& g : \mathbb{R}^n \rightarrow \mathbb{R}^m \end{aligned}\end{split}\]

The objective function \(f(x)\) and the constraints function \(g(x)\) should have Lipschitz-continuous gradients.

The two types of constraints are handled differently: the box constraints on \(x\) are taken into account directly, while the general constraints \(g(x)\) are relaxed using an Augmented Lagrangian Method (ALM).

Equality constraints can be expressed by setting \(\underline{z}_i = \overline{z}_i\).
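For instance, setting \(\underline{z}_i = \overline{z}_i = 0\) enforces the equality \(g_i(x) = 0\).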

For ease of notation, the box constraints on \(x\) and \(g(x)\) are written as inclusions in the rectangular sets \(C\) and \(D\), respectively:

\[\begin{split}\begin{aligned} & \underset{x}{\text{minimize}} & & f(x) \\ & \text{subject to} & & \phantom{g(}x\phantom{)} \in C \\ &&& g(x) \in D \end{aligned}\end{split}\]
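As a rough intuition for the ALM relaxation mentioned above, consider the following textbook shifted-penalty formulation (shown for illustration only; the exact formulation used by the alpaqa solvers is described in their own documentation). The inner solver minimizes an augmented Lagrangian of the form

\[\varphi_{\Sigma,y}(x) = f(x) + \tfrac{1}{2}\, \operatorname{dist}_\Sigma^2\big(g(x) + \Sigma^{-1} y,\; D\big)\]

over \(x \in C\), where \(y\) is the current estimate of the Lagrange multipliers and \(\Sigma\) is a diagonal matrix of penalty factors. The outer solver then updates \(y\) and \(\Sigma\) and invokes the inner solver again.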

Simple example#

Consider the following simple two-dimensional problem, with the Rosenbrock function (parametrized by \(p\)) as the cost function:

\[\begin{split}\begin{aligned} & \underset{x_1,x_2}{\text{minimize}} & & (1 - x_1)^2 + p\,(x_2 - x_1^2)^2 \\ &\text{subject to} & & \begin{aligned}[t] -0.25 &\le x_1 \le 1.5 \\ -0.5 &\le x_2 \le 2.5 \\ \end{aligned} \\ &&& \begin{aligned}[t] (x_1 - 0.5)^3 - x_2 + 1 &\le 0 \\ x_1 + x_2 - 1.5 &\le 0 \\ \end{aligned} \end{aligned}\end{split}\]

In other words,

\[\begin{split}\begin{aligned} f(x) &= (1 - x_1)^2 + p\,(x_2 - x_1^2)^2 \\ C &= [-0.25, 1.5] \times [-0.5, 2.5] \\ g(x) &= \begin{pmatrix} (x_1 - 0.5)^3 - x_2 + 1 \\ x_1 + x_2 - 1.5 \end{pmatrix} \\ D &= (-\infty, 0] \times (-\infty, 0] \end{aligned}\end{split}\]

Problem description#

The objective function and the constraints are defined as CasADi expressions with symbolic variables, and then converted into CasADi functions. The arguments of these functions are the decision variables and an optional parameter vector.

# %% Build the problem (CasADi code, independent of alpaqa)
import casadi as cs

# Make symbolic decision variables
x1, x2 = cs.SX.sym("x1"), cs.SX.sym("x2")
x = cs.vertcat(x1, x2)  # Collect decision variables into one vector
# Make a parameter symbol
p = cs.SX.sym("p")

# Objective function f and the constraints function g
f = (1 - x1) ** 2 + p * (x2 - x1**2) ** 2
g = cs.vertcat(
    (x1 - 0.5) ** 3 - x2 + 1,
    x1 + x2 - 1.5,
)

# Define the bounds
C = [-0.25, -0.5], [1.5, 2.5]  # -0.25 <= x1 <= 1.5, -0.5 <= x2 <= 2.5
D = [-cs.inf, -cs.inf], [0, 0]  #         g1 <= 0,           g2 <= 0
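Before handing the model to alpaqa, you can optionally sanity-check it by wrapping the expressions in plain CasADi functions and evaluating them at a test point. This step is not required by alpaqa; f_fun and g_fun are throwaway names used only for this check.

# %% Optional: sanity-check the model using plain CasADi
f_fun = cs.Function("f", [x, p], [f])
g_fun = cs.Function("g", [x, p], [g])
print(f_fun([0.5, 0.5], 1))  # (1 - 0.5)^2 + 1 * (0.5 - 0.25)^2 = 0.3125
print(g_fun([0.5, 0.5], 1))  # [0.5, -0.5]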

Next, we construct the alpaqa-specific minimization problem using the alpaqa.pyapi.minimize.minimize() function. The gradients of the problem functions are computed using CasADi and compiled into efficient C functions. All of this happens inside the alpaqa.pyapi.minimize.MinimizationProblemDescription.compile() function, which returns an instance of alpaqa._alpaqa.float64.CasADiProblem that can later be passed to a solver.

# %% Generate and compile C code for the objective and constraints using alpaqa
from alpaqa import minimize

problem = (
    minimize(f, x)  #       Objective function f(x)
    .subject_to_box(C)  #   Box constraints x ∊ C
    .subject_to(g, D)  #    General ALM constraints g(x) ∊ D
    .with_param(p, [1])  #  Parameter with default value (can be changed later)
).compile()

Numerical values of the problem (like the bounds and the parameters) can be specified when generating the problem, or can be modified after loading it:

# You can change the bounds and parameters after loading the problem
problem.param = [10.0]
problem.D.lowerbound[1] = -1e20
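Changing these values does not require recompiling the problem: the next call to the solver simply uses the updated parameter and bounds.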

Selecting a solver#

The solvers in this package consist of an inner solver that can handle box constraints, such as PANOC, and an outer ALM solver that relaxes the general constraints \(g(x) \in D\). Solvers can be composed easily, for instance:

# %% Build a solver with the default parameters
import alpaqa as pa

inner_solver = pa.PANOCSolver()
solver = pa.ALMSolver(inner_solver)

Each solver has its own set of optional parameters that can be specified using a dictionary, for example:

# %% Build a solver with custom parameters

inner_solver = pa.PANOCSolver(
    panoc_params={
        'max_iter': 1000,
        'stop_crit': pa.PANOCStopCrit.FPRNorm,
    },
    lbfgs_params={
        'memory': 10,
    },
)
solver = pa.ALMSolver(
    alm_params={
        'tolerance': 1e-10,
        'dual_tolerance': 1e-10,
        'initial_penalty': 50,
        'penalty_update_factor': 20,
    },
    inner_solver=inner_solver,
)

For a full overview and description of all parameters, see the Parameters page. For example, you can find the documentation for alpaqa::PANOCParams, alpaqa::LBFGSParams, and alpaqa::ALMParams.

Some inner solvers can be configured with alternative fast directions. For example, the PANOC solver uses the alpaqa::StructuredLBFGSDirection by default, but can also make use of e.g. alpaqa::LBFGSDirection or alpaqa::AndersonDirection.

# %% Build a solver with alternative fast directions

direction = pa.LBFGSDirection({'memory': 10})
inner_solver = pa.PANOCSolver({'stop_crit': pa.PANOCStopCrit.FPRNorm}, direction)
solver = pa.ALMSolver(
    {
        'tolerance': 1e-10,
        'dual_tolerance': 1e-10,
        'initial_penalty': 50,
        'penalty_update_factor': 20,
    },
    inner_solver,
)
Different solver classes

Solving the problem#

Finally, you can obtain a solution by passing the problem specification to the solver. This returns the local minimizer \(x_\mathrm{sol}\), the corresponding Lagrange multipliers \(y_\mathrm{sol}\) of the general constraints \(g(x) \in D\), and a dictionary containing solver statistics.

# %% Compute a solution

x_sol, y_sol, stats = solver(problem)

Optionally, you can supply an initial guess for both the decision variables \(x\) and the Lagrange multipliers \(y\). If no initial guess is specified, the default initial values for x0 and y0 are zero.

# %% Compute a solution with an initial guess

# Set initial guesses at arbitrary values
x0 = [0.1, 1.8]  # decision variables
y0 = [0.0, 0.0]  # Lagrange multipliers for g(x)

# Solve the problem
x_sol, y_sol, stats = solver(problem, x0, y0)

# Print the results
print(stats["status"])
print(f"Solution:      {x_sol}")
print(f"Multipliers:   {y_sol}")
print(f"Cost:          {problem.eval_f(x_sol):.5f}")

This will print something similar to:

SolverStatus.Converged
Solution:      [-0.25     0.57813]
Multipliers:   [10.3125  0.    ]
Cost:          4.22119

It is highly recommended to always check the solver status in stats["status"] to make sure that the solver actually converged. The stats variable also contains further statistics, for both the outer and the inner solver. You can find a full overview in the documentation of alpaqa::ALMSolver::Stats and alpaqa::InnerStatsAccumulator<PANOCStats>.
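A minimal sketch of such a check, assuming the SolverStatus enum is exposed at the top level of the alpaqa module (as the printed output above suggests):

# %% Check that the solver converged
if stats["status"] != pa.SolverStatus.Converged:
    raise RuntimeError(f"Solver failed: {stats['status']}")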

A contour plot of the objective function, with the constraints shown in black and the iterates produced by the solver in red. The red circle indicates the optimal solution.