alpaqa 1.0.0a15
Nonconvex constrained optimization
Problem formulations

General NLP formulation

Most alpaqa solvers deal with problems in the following form:

\[ \begin{equation}\tag{P}\label{eq:problem_main} \begin{aligned} & \underset{x}{\text{minimize}} & & f(x) &&&& f : \Rn \rightarrow \R \\ & \text{subject to} & & \underline{x} \le x \le \overline{x} \\ &&& \underline{z} \le g(x) \le \overline{z} &&&& g : \Rn \rightarrow \Rm \end{aligned} \end{equation} \]

\( f \) is called the cost or objective function, and \( g \) is the constraint function.

The solver needs to be able to evaluate the following required functions and derivatives:

  • eval_f \( : \Rn \to \R : x \mapsto f(x) \) (objective function)
  • eval_grad_f \( : \Rn \to \Rn : x \mapsto \nabla f(x) \) (gradient of the objective)
  • eval_g \( : \Rn \to \Rm : x \mapsto g(x) \) (constraint function)
  • eval_grad_g_prod \( : \Rn \times \Rm \to \Rn : (x, y) \mapsto \nabla g(x)\, y \) (gradient-vector product of the constraints)

Usually, automatic differentiation (AD) is used to evaluate the gradients and gradient-vector products. Many AD software packages are available, see e.g. https://autodiff.org/ for an overview.

Additionally, the solver needs to be able to project onto the rectangular sets

\[ \begin{equation} \begin{aligned} C &\;\defeq\; \defset{x \in \Rn}{\underline{x} \le x \le \overline{x}}, \\ D &\;\defeq\; \defset{z \in \Rm}{\underline{z} \le z \le \overline{z}}. \end{aligned} \end{equation} \]

Problem API

The alpaqa solvers access the problem functions through the API outlined in alpaqa::TypeErasedProblem.
Usually, problems are defined using C++ structs, providing the evaluations described above as public member functions. These problem structs are structurally typed, which means that they only need to provide member functions with the correct names and signatures. Inheriting from a common base class is not required.

As an example, the following struct defines a problem that can be passed to the alpaqa solvers. Detailed descriptions of each function can be found in the alpaqa::TypeErasedProblem documentation.

struct RosenbrockProblem {
    USING_ALPAQA_CONFIG(alpaqa::DefaultConfig);
    // Problem dimensions
    length_t get_n() const; // number of unknowns
    length_t get_m() const; // number of general constraints
    // Cost
    real_t eval_f(crvec x) const;
    // Gradient of cost
    void eval_grad_f(crvec x, rvec grad_fx) const;
    // Constraints
    void eval_g(crvec x, rvec gx) const;
    // Gradient-vector product of constraints
    void eval_grad_g_prod(crvec x, crvec y, rvec grad_gxy) const;
    // Proximal gradient step
    real_t eval_prox_grad_step(real_t γ, crvec x, crvec grad, rvec x̂, rvec p) const;
    // Projecting difference onto constraint set D
    void eval_proj_diff_g(crvec z, rvec p) const;
    // Projection of Lagrange multipliers
    void eval_proj_multipliers(rvec y, real_t max_y) const;
};

Base classes for common use cases

Convenience classes with default implementations of some of these functions are provided for common use cases, for example alpaqa::BoxConstrProblem for problems whose constraint sets are simple boxes.

The user can simply inherit from these classes to inject the default implementations into their problem definition, as demonstrated in the following examples.

It is highly recommended to study the C++/CustomCppProblem/main.cpp example now to see how optimization problems can be formulated in practice, before we continue with some more specialized use cases.

Second-order derivatives

Some solvers can exploit information about the Hessian of the (augmented) Lagrangian of the problem. To use these solvers, additional second-order functions are required; they should be added as member functions to your problem struct, in the same way as the required functions above. The full list of supported second-order evaluations can be found in the alpaqa::TypeErasedProblem documentation.

Matrices can be stored in a dense format, in compressed sparse column storage (CCS) format, or in sparse coordinate list format (COO). Solvers convert the input to a format that they support, so some performance could be gained by choosing the appropriate storage type, because conversions may involve sorting indices and permuting the nonzero values. See alpaqa::sparsity for details. For sparse symmetric Hessian matrices, only the upper-triangular part is stored. Dense matrices are always stored in full, even if they are symmetric. The matrix evaluation functions only overwrite the nonzero values, vectorized by column.

Some solvers do not require the full Hessian matrices, but use Hessian-vector products only, for example when using Newton-CG. These products can often be computed efficiently using automatic differentiation, at a computational cost that's not much higher than a gradient evaluation.

The TypeErasedProblem class provides functions to query which optional problem functions are available. For example, provides_eval_jac_g returns true if the problem provides an implementation for eval_jac_g. Calling an optional function that is not provided results in an alpaqa::not_implemented_error exception being thrown.

Specialized combined evaluations

In practice, the solvers do not always evaluate the functions \( f(x) \) and \( g(x) \) directly. Instead, they evaluate the Lagrangian and augmented Lagrangian functions of the problem. In many applications, such as single-shooting optimal control problems, some computations are common to the evaluation of both \( f(x) \) and \( g(x) \), and significant speedups can be achieved by providing implementations that evaluate both at the same time, or even compute the (augmented) Lagrangian directly. Similarly, when using automatic differentiation, evaluation of the gradient \( \nabla f(x) \) produces the function value \( f(x) \) as a byproduct, motivating the simultaneous evaluation of these quantities as well.

The full list of these combined evaluations can be found in the TypeErasedProblem documentation. They can be provided in the same fashion as eval_f above.

  • eval_f_grad_f: \( f(x) \) and \( \nabla f(x) \)
  • eval_f_g: \( f(x) \) and \( g(x) \)
  • eval_grad_f_grad_g_prod: \( \nabla f(x) \) and \( \nabla g(x)\,y \)
  • eval_grad_L: gradient of the Lagrangian: \( \nabla_{\!x} L(x, y) = \nabla f(x) + \nabla g(x)\,y \)
  • eval_ψ: augmented Lagrangian: \( \psi(x) \)
  • eval_grad_ψ: gradient of the augmented Lagrangian: \( \nabla \psi(x) \)
  • eval_ψ_grad_ψ: augmented Lagrangian and gradient: \( \psi(x) \) and \( \nabla \psi(x) \)
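For orientation, \( \psi \) combines the objective with a shifted quadratic penalty on the constraint violation. A common form (with Lagrange multipliers \( y \) and penalty factors \( \Sigma \); consult the alpaqa::TypeErasedProblem documentation for the exact convention used by alpaqa) is

\[ \psi(x) \;=\; f(x) + \tfrac{1}{2} \normsq{\Sigma^{1/2}\big(g(x) + \Sigma^{-1}y - \Pi_D(g(x) + \Sigma^{-1}y)\big)}, \]

where \( \Pi_D \) denotes the projection onto the set \( D \) defined earlier.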

Proximal operators

In addition to standard box constraints on the variables, some solvers also allow adding a possibly non-smooth proximal term to the objective.

\[ \begin{equation}\tag{P-prox}\label{eq:problem_prox} \begin{aligned} & \underset{x}{\text{minimize}} & & f(x) + h(x) &&&& f : \Rn \rightarrow \R,\;\; h : \Rn \rightarrow \overline{\R} \\ & \text{subject to} & & \underline{z} \le g(x) \le \overline{z} &&&& g : \Rn \rightarrow \Rm \end{aligned} \end{equation} \]

By selecting

\[ h(x) = \delta_C(x) \;\defeq\; \begin{cases} 0 & \text{if } x \in C \\ +\infty & \text{otherwise,} \end{cases} \]

the standard NLP formulation \( \eqref{eq:problem_main} \) is recovered.

To add a custom function \( h(x) \) to the problem formulation, it suffices to implement the eval_prox_grad_step function. Given the current iterate \( x \), the gradient \( \nabla \psi(x) \) and a positive step size \( \gamma \), it computes the forward-backward step \( p = \prox_{\gamma h}\big(x - \gamma \nabla \psi(x)\big) - x \), where \( \prox_{\gamma h}(z) \;\defeq\; \argmin_x \left\{ h(x) + \tfrac{1}{2\gamma} \normsq{x - z} \right\} \) denotes the proximal operator of \( h \) with step size \( \gamma \).

Note that in general, combining an arbitrary function \( h(x) \) with the box constraints \( x \in C \) is not possible. One notable exception is the \( \ell_1 \)-norm \( h(x) = \lambda\norm{x}_1 \). This choice for \( h \), in combination with the box constraints, is supported by the alpaqa::BoxConstrProblem class, by setting the alpaqa::BoxConstrProblem::l1_reg member.

The alpaqa::prox_step utility function can be used to implement eval_prox_grad_step. See Functions and operators for details.

Dynamically loading problems

alpaqa has a dependency-free, single-header C API that can be used to define problems in a shared library that can be dynamically loaded by the solvers.

The API is defined in dl-problem.h. The main entry point of your shared object should be a function called register_alpaqa_problem that returns a struct of type alpaqa_problem_register_t. This struct contains a pointer to the problem instance, a function pointer that will be called to clean up the problem instance, and a pointer to a struct of type alpaqa_problem_functions_t, which contains function pointers to all problem functions.

Additional user-defined arguments can be passed through a void pointer parameter of the register_alpaqa_problem function.

In C++, you could register a problem like this:

using real_t = alpaqa_real_t;

/// Custom problem class to expose to alpaqa.
struct Problem {
    real_t eval_f(const real_t *x_) const;
    void eval_grad_f(const real_t *x_, real_t *gr_) const;
    void eval_g(const real_t *x_, real_t *g_) const;
    void eval_grad_g_prod(const real_t *x_, const real_t *y_, real_t *gr_) const;
    void initialize_box_C(real_t *lb_, real_t *ub_) const;
    void initialize_box_D(real_t *lb_, real_t *ub_) const;
    std::string get_name() const { return "example problem"; }
    /// C API struct containing pointers to the problem functions.
    alpaqa_problem_functions_t funcs{};
    /// Constructor initializes the problem and exposes the problem functions.
    Problem(/* ... */) {
        funcs.n = 3; // number of variables
        funcs.m = 2; // number of constraints
        funcs.eval_f = member_caller<&Problem::eval_f>();
        funcs.eval_grad_f = member_caller<&Problem::eval_grad_f>();
        funcs.eval_g = member_caller<&Problem::eval_g>();
        funcs.eval_grad_g_prod = member_caller<&Problem::eval_grad_g_prod>();
        funcs.initialize_box_C = member_caller<&Problem::initialize_box_C>();
        funcs.initialize_box_D = member_caller<&Problem::initialize_box_D>();
    }
};

/// Main entry point: called by the @ref alpaqa::dl::DLProblem class.
extern "C" alpaqa_problem_register_t
register_alpaqa_problem(void *user_data_v) noexcept try {
    alpaqa_problem_register_t result;
    auto problem = std::make_unique<Problem>(/* ... */);
    alpaqa::register_member_function(result, "get_name", &Problem::get_name);
    result.functions = &problem->funcs;
    result.instance = problem.release();
    result.cleanup = [](void *instance) { delete static_cast<Problem *>(instance); };
    return result;
} catch (...) {
    return {.exception = new alpaqa_exception_ptr_t{std::current_exception()}};
}

A full example can be found in problems/sparse-logistic-regression.cpp. While defining the register_alpaqa_problem function in C++ is usually much more ergonomic than in plain C, the latter is also supported, as demonstrated in C++/DLProblem/main.cpp.

The problem can then be loaded using the alpaqa::dl::DLProblem class, or using the alpaqa-driver command line interface. For more details, see the two examples mentioned previously.

Existing problem adapters

For interoperability with existing frameworks like CasADi and CUTEst, alpaqa provides dedicated problem adapters; see the Problems topic for the full list.

See also
Problems topic