SuperSCS  1.3.2
Using SuperSCS

Table of Contents

  1. SuperSCS in C
  2. SuperSCS in MATLAB
  3. SuperSCS via CVX in MATLAB
  4. SuperSCS in Python
  5. SuperSCS via CVXPy in Python

SuperSCS in C

SuperSCS: Compiling & Linking in C

Read this section if you intend to use SuperSCS in C.

Most users are interested in using SuperSCS via its MATLAB and Python interfaces, or through CVX/CVXPy.

To use SuperSCS in C, you first need to build the source code as explained above.
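
For instance, assuming the standard make-based build and that you are in the root directory of the SuperSCS sources (where scs.mk resides), building boils down to:

make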

This will generate the compiled library files in the out/ folder, including the static libraries libscsdir.a (direct linear system solver) and libscsindir.a (indirect solver), against which you can link your own C code.

Let us give a brief example of how to use SuperSCS from C and how to compile and link your source code (read more here)...

// FILE: superscs_test.c
#include <stdio.h>
#include <stdlib.h>
#include "scs.h"

int main(int argc, char** argv) {
    ScsData * data = SCS_NULL;
    ScsCone * cone = SCS_NULL;
    ScsInfo * info = scs_init_info();
    ScsSolution * sol = scs_init_sol();
    const char * filepath = "path/to/my_problem.yml";
    scs_int status;
    // load problem from file
    status = scs_from_YAML(filepath, &data, &cone);
    // solver options
    data->stgs->do_super_scs = 1;   // run SuperSCS (not plain SCS)
    data->stgs->memory = 100;       // memory of the direction method
    data->stgs->verbose = 1;        // print progress information
    // solve the problem
    status = scs(data, cone, sol, info);
    // free allocated memory
    scs_free_data_cone(data, cone);
    scs_free_sol(sol);
    scs_free_info(info);
    return (EXIT_SUCCESS);
}

Let us now compile and link against one of the static libraries in out/, here libscsindir.a (linking against libscsdir.a works in exactly the same way):

# On Windows and MacOSX
gcc -Iinclude superscs_test.c \
    -o superscs_test path/to/libscsindir.a \
    -llapack -lblas -lm

On Linux, append -lrt, that is:

# On Linux
gcc -Iinclude superscs_test.c \
    -o superscs_test path/to/libscsindir.a \
    -llapack -lblas -lm -lrt

In particular, -llapack and -lblas link against LAPACK and BLAS (which SuperSCS uses for its linear algebra), -lm links the standard C math library, and, on Linux, -lrt links the POSIX real-time library which provides the timing routines.

SuperSCS can be compiled to run on multiple cores using OpenMP.

In order to compile SuperSCS with OpenMP enabled, set USE_OPENMP=1 in scs.mk.
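
For instance (a sketch; with GNU make the USE_OPENMP flag can equivalently be passed on the command line instead of editing scs.mk):

# option 1: enable OpenMP in scs.mk
USE_OPENMP = 1

# option 2: pass the flag at build time (GNU make command-line override)
make clean
make USE_OPENMP=1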

You may find a comprehensive example here.

SuperSCS in MATLAB

SuperSCS can be called directly from MATLAB to solve conic problems.

Two functions, namely scs_direct and scs_indirect, solve the problem with the direct and indirect methods respectively.

Here is an example:

% problem data (A, b, c)
m = 9;
n = 4;
data.A = sparse(randn(m,n));
data.b = randn(m,1);
data.c = randn(n,1);
% cone: a single second-order cone of dimension m
cones.q = m;
% solver options
solver_options = struct('eps', 1e-5, 'do_super_scs', 1, 'direction', 100, ...
    'memory', 50, 'rho_x', .001);
% call the direct solver
[x, y, s, info] = scs_direct(data, cones, solver_options);

The user needs to provide three structures:

  1. data, with the problem data A, b and c,
  2. cones, which describes the cone (here a single second-order cone of dimension m),
  3. solver_options, with the solver settings.

To use SuperSCS (rather than plain SCS), set the option solver_options.do_super_scs = 1.

A list of all solver options and their default values can be found here.

Note that the method with which SuperSCS computes the direction (e.g., the restarted Broyden method, Anderson's acceleration, etc.) is specified using solver_options.direction. Admissible values are numeric codes, not strings; for example, 100 selects the restarted Broyden method and 150 selects Anderson's acceleration (see the list of solver options mentioned above for all admissible codes).
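
For instance, to switch the example above to Anderson's acceleration:

solver_options.direction = 150;   % Anderson's acceleration
solver_options.memory = 10;       % memory of the acceleration scheme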

Functions scs_direct and scs_indirect return the primal and dual solution of the problem, \((x, y, s)\), and a structure info with information about the solution (such as the solver time, the number of iterations, the relative duality gap and more).

By default, SuperSCS does not record the progress of the algorithm. To do so, you need to set the option do_record_progress=1 in the solver options.
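
For example, with the solver_options structure defined above:

solver_options.do_record_progress = 1;      % record per-iteration progress
[x, y, s, info] = scs_direct(data, cones, solver_options);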

SuperSCS via CVX in MATLAB

SuperSCS can be used with CVX in MATLAB by simply using

cvx_solver scs

and setting the solver option do_super_scs=1 as shown below

cvx_begin
    cvx_solver scs
    cvx_solver_settings('eps', 1e-4,...
        'do_super_scs', 1,...
        'direction', 150,...
        'memory', 10)
    variable x(n)
    minimize( 0.5*x'*Q*x + q'*x + r )
    subject to
        lb <= x <= ub
        F*x == f
cvx_end

Several examples are given here.

Solver info can be stored using the property dumpfile as explained here.
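
For instance, a sketch based on the quadratic program above (the file name superscs_run.mat is arbitrary); CVX then dumps the data passed to the solver and the returned information into that .mat file:

cvx_begin
    cvx_solver scs
    cvx_solver_settings('do_super_scs', 1, 'dumpfile', 'superscs_run.mat')
    variable x(n)
    minimize( 0.5*x'*Q*x + q'*x + r )
    subject to
        lb <= x <= ub
        F*x == f
cvx_end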

SuperSCS in Python

In order to use SuperSCS in Python, you first need to install the superscs module.
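
The exact steps are given in the installation instructions; as a sketch, assuming the module ships with a standard setuptools setup.py in the python/ folder of the SuperSCS source tree:

cd python
pip install .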

Then, it is straightforward to import superscs, define the problem data (the matrix A as a scipy.sparse matrix and the vectors b and c) and the cone, and call superscs.solve:

import superscs
import numpy as np
from scipy import sparse

# problem data: the sparse matrix A and the vectors b and c
ij = np.array([[0, 1, 2, 3], [0, 1, 2, 3]])
A = sparse.csc_matrix(([-1., -1., 1., 1.], ij), (4, 4))
b = np.array([0., 0., 1., 1.])
c = np.array([1., 1., -1., -1.])
# cone: a linear (positive orthant) cone of dimension 4
cone = {'l': 4}
data = {'A': A, 'b': b, 'c': c}
# solve with the direct method and a memory of 50
sol = superscs.solve(data, cone, use_indirect = False, memory = 50, verbose = 2)

The last invocation takes as input arguments:

  1. the object data, which contains the problem data, i.e., the triplet \((A,b,c)\),
  2. the cone cone, which, here, is a linear cone of dimension 4,
  3. additional optional arguments (such as the LBFGS memory).

The call returns sol, a dictionary which contains the solver status, the solution and additional information.

Here is what we see if we print out sol from the example above:

{'info': {'dobj': -1.9999059647316126,
          'iter': 3,
          'pobj': -1.999698352184447,
          'relGap': 4.1525795644121774e-05,
          'resDual': 3.168953637264187e-05,
          'resInfeas': nan,
          'resPri': 6.556650505320437e-05,
          'resUnbdd': 1.376693966512017,
          'setupTime': 0.104,
          'solveTime': 0.477834,
          'status': 'Solved',
          'statusVal': 1},
 's': array([ 0., 0., 0., 0.]),
 'x': array([ 5.13892443e-05, 5.13892443e-05, 9.99900565e-01,
        9.99900565e-01]),
 'y': array([ 0.99995195, 0.99995195, 0.99995298, 0.99995298])}

We see that sol['info']['status'] is 'Solved' and that the status code sol['info']['statusVal'] is 1, meaning that the problem has been solved.

The triplet (sol['x'], sol['y'], sol['s']) is the solution of the problem, and sol['info']['pobj'] and sol['info']['dobj'] are the values of the primal and dual objectives.

We may also inspect the relative duality gap, primal and dual residuals, the residuals associated with the infeasibility and unboundedness of the solution and the number of iterations required to solve the problem.
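
Since sol is a plain dictionary (as printed above), these quantities can be read off directly, e.g.:

x_star = sol['x']                  # primal solution
status = sol['info']['status']     # 'Solved'
num_iters = sol['info']['iter']    # number of iterations
rel_gap = sol['info']['relGap']    # relative duality gap
pri_res = sol['info']['resPri']    # primal residual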

SuperSCS via CVXPy in Python

First, let us load some necessary libraries

# Loading some libraries...
import cvxpy as cp
import numpy as np
import superscs

In order to verify that SuperSCS is properly installed and CVXPy can use it, run

cp.installed_solvers()

This will return a list of solvers and SUPERSCS should be among them.
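
For a quick programmatic check (the solver name SUPERSCS matches the one used below):

# raises an AssertionError if SuperSCS is not registered with CVXPy
assert 'SUPERSCS' in cp.installed_solvers()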

Then, we define the problem data

# Problem data...
m = 2
n = 3
T = 10
alpha = 0.1
beta = 1.0
A = np.eye(n) + alpha*np.random.randn(n,n)
B = np.random.randn(n,m)
x_0 = beta*np.random.randn(n,1)
# Form and solve control problem...
x = cp.Variable(n, T+1)
u = cp.Variable(m, T)
states = []
for t in range(T):
    cost = cp.pnorm(u[:,t], 1)
    constr = [x[:,t+1] == A*x[:,t] + B*u[:,t],
              cp.norm(u[:,t], 'inf') <= 1]
    states.append(cp.Problem(cp.Minimize(cost), constr))

Finally, we assemble the overall optimization problem (summing the per-stage objectives and concatenating the constraints), add the initial and terminal conditions, and invoke the solver

# sums problem objectives and concatenates constraints...
prob = sum(states)
prob.constraints += [x[:,T] == 0, x[:,0] == x_0]
prob.solve(solver="SUPERSCS", verbose=True)
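
Once the call returns, the standard CVXPY attributes can be used to inspect the result:

# solver status and optimal objective value
print(prob.status)   # e.g., 'optimal'
print(prob.value)
# optimal input sequence and state trajectory
print(u.value)
print(x.value)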