Examples in CVX MATLAB

In what follows we give a few code snippets in MATLAB and compare SuperSCS with SCS.

This is a preliminary study to show that SuperSCS outperforms SCS, but a more thorough analysis is necessary.

In what follows we compare the two algorithms on several types of problems: (i) a LASSO problem ( \(\ell_1\)-regularized least squares), (ii) semidefinite programs (SDPs) and, in particular, a minimum-norm problem and an LMI-constrained problem, (iii) a logistic regression problem, (iv) a minimum \(p\)-norm problem, (v) a 2-norm-constrained minimum-norm problem, (vi) a matrix completion problem involving the nuclear norm, (vii) a portfolio selection problem and, last, (viii) a sparse PCA problem.

Note
The following benchmarks are available in /tests/matlab/.
Reported runtimes were recorded on an Intel® Core™ i5-6200U CPU @ 2.30GHz × 4 machine with 11.6 GiB RAM running 64-bit Ubuntu 14.04.
Both SuperSCS and SCS were compiled using gcc v4.8.5 and accessed via MATLAB® using a MEX interface.
Remarks
For the sake of reproducibility, we fix the random seed:
rng('default');
rng(1);

LASSO problem

We solve a simple LASSO problem of the form

\[ \mathrm{Minimize}_x\ \textstyle\frac{1}{2} \|Ax-b\|^2 + \mu \|x\|_1, \]

with \(x\in\mathbb{R}^n\), \(A\in\mathbb{R}^{m\times n}\).

Here \(n=10000\), \(m=2000\).

We define the problem data as follows:

n = 10000;                           % number of variables
m = 2000;                            % number of observations
s = ceil(n/10);                      % number of nonzeros in x_true
x_true = [randn(s,1); zeros(n-s,1)];
x_true = x_true(randperm(n));
density = 0.1;
rcA = .1;                            % reciprocal condition number of A
A = sprandn(m, n, density, rcA);
b = A*x_true + 0.1*randn(m,1);
mu = 1;

Note that \(A\) is a sparse matrix with density \(10\%\) and reciprocal condition number \(0.1\).
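
This can be verified with a quick check (a minimal sketch using only MATLAB built-ins):

fprintf('density of A: %.3f\n', nnz(A)/numel(A));          % approximately 0.1
fprintf('nonzeros in x_true: %d of %d\n', nnz(x_true), n);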

We then formulate the problem in CVX with accuracy \(\epsilon=10^{-4}\).

We solve it using SuperSCS with restarted Broyden directions (memory \(100\)) and \(k_0=0\):

do_super_scs = 1;                     % set to 0 to run the original SCS
tic;
cvx_begin
    cvx_solver scs
    cvx_solver_settings('eps', 1e-4,...
        'do_super_scs', do_super_scs,...
        'direction', 100,...          % restarted Broyden directions
        'memory', 100,...
        'k0', 0)
    variable x_c(n)
    minimize(0.5*sum_square(A*x_c - b) + mu*norm(x_c,1))
cvx_end
toc

Notice that we may pass any parameters to SCS using CVX's cvx_solver_settings.
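
For instance (a sketch; 'max_iters' and 'verbose' are standard SCS settings):

cvx_solver_settings('eps', 1e-4,...
    'max_iters', 2000,...   % cap the number of iterations
    'verbose', 0)           % suppress solver output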

SuperSCS with the above options terminates after \(169\) iterations and \(85.1\) s.

SCS, instead, requires \(3243\) iterations, corresponding to \(802\) s.

SuperSCS exhibits a speed-up of \(\times 9.4\).

Note
Any values that are not overridden using cvx_solver_settings are set to their default values.

Semidefinite Programming

Here we solve the problem

\begin{eqnarray*} &&\mathrm{Minimize}_{Z}\ \|Z-P\|_{\mathrm{fro}}\\ &&Z\geq 0\\ &&Z: \mathrm{Hermitian},\ \mathrm{Toeplitz} \end{eqnarray*}

n = 800;
P = randn(n,n);
tic
cvx_begin sdp
    cvx_solver scs
    cvx_solver_settings('eps', 1e-4,...
        'do_super_scs', 1,...
        'memory', 50)
    variable Z(n,n) hermitian toeplitz
    dual variable Q
    minimize( norm( Z - P, 'fro' ) )
    Z >= 0 : Q;
cvx_end
toc

SuperSCS solves this problem in \(218\)s ( \(103\) iterations), whereas SCS requires \(500\)s to terminate ( \(392\) iterations).

Another interesting SDP problem is that of determining a symmetric positive definite matrix \(P\) which solves

\begin{eqnarray*} &&\mathrm{Minimize}_{P}\ \mathrm{trace}(P)\\ &&P=P'\\ &&P \geq I\\ &&A'P + PA \leq -I \end{eqnarray*}

For this example we chose a (stable) matrix \(A\) whose eigenvalues are logarithmically spaced between \(-10^{-0.5}\) and \(-10^{1}\).

n = 100;
A = diag(-logspace(-0.5, 1, n));     % eigenvalues in [-10, -10^(-0.5)]
U = orth(randn(n,n));
A = U'*A*U;                          % similarity transform preserves the spectrum
tic
cvx_begin sdp
    cvx_solver scs
    cvx_solver_settings('eps', 1e-3,...
        'do_super_scs', 1, ...
        'memory', 100);
    variable P(n,n) symmetric
    minimize(trace(P))
    A'*P + P*A <= -eye(n)
    P >= eye(n)
cvx_end
toc

SuperSCS converges after \(40.7\)s ( \(314\) iterations), whereas SCS converges after \(415\)s ( \(4748\) iterations).

Logistic Regression

Here we solve the following \(\ell_1\)-regularized logistic regression problem:

\[ \mathrm{Minimize}_w\ \lambda \|w\|_1 + \textstyle\sum_{i}\log(1+\exp(a_i'w)), \]

where \(a_i\) are the data samples with the labels folded in (see below).

density = 0.1;
p = 1000; % features
q = 10*p; % total samples
w_true = sprandn(p,1,0.2);
X_tmp = 3*sprandn(p, q, density);
ips = -w_true'*X_tmp;
ps = (exp(ips)./(1 + exp(ips)))';
labels = 2*(rand(q,1) < ps) - 1;
X_pos = X_tmp(:,labels==1);
X_neg = X_tmp(:,labels==-1);
X = [X_pos -X_neg]; % include labels with data
lam = 2;
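
Note that the column flip X = [X_pos -X_neg] folds the labels into the data: the \(j\)-th column of X is \(y_j x_j\). Moreover, log_sum_exp([0; t]) is simply the softplus \(\log(1+e^t)\), which is how the logistic loss enters the objective below. A minimal numeric check of the latter identity (a sketch; t is local to this snippet):

t = randn();
assert(abs(log(sum(exp([0; t]))) - log(1 + exp(t))) < 1e-12);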

We formulate the problem as follows:

tolerance = 1e-3;
cvx_begin
    cvx_solver scs
    cvx_solver_settings('eps', tolerance,...
        'do_super_scs', 1,...
        'ls', 5,...                  % line-search parameter
        'memory', 50)
    variable w(p)
    minimize(sum(log_sum_exp([sparse(1,q);w'*X])) + lam * norm(w,1))
cvx_end

For \(\epsilon=10^{-3}\), SuperSCS terminates after \(131\)s ( \(134\) iterations), while SCS requires \(329\)s ( \(675\) iterations).

For \(\epsilon=10^{-4}\), SuperSCS requires \(179\)s ( \(183\) iterations), whereas SCS still requires as much as \(497\)s ( \(983\) iterations).

In both cases, SuperSCS is about \(2.5\) times faster.

Minimization of p-norm

Consider the problem

\begin{eqnarray*} &&\mathrm{Minimize}_x\ \|x\|_p\\ &&Gx = f \end{eqnarray*}

This is the problem formulation in CVX:

n = 2000;
m = ceil(n/4);
density = 0.1;
G = sprandn(m, n, density);
f = randn(m,1) * n * density;
pow = 1.5;
tic
cvx_begin
    cvx_solver scs
    cvx_solver_settings('eps', 1e-4,...
        'do_super_scs', 1, ...       % set to 0 to run the original SCS
        'memory', 50,...
        'use_indirect', 0);
    variable x(n)
    minimize(norm(x, pow))
    subject to
        G*x == f
cvx_end
toc

For \(\epsilon=10^{-3}\), SuperSCS converged after \(11.3\)s ( \(1355\) iterations), while SCS required \(39\)s ( \(4911\) iterations).

For \(\epsilon=10^{-4}\), SuperSCS terminated after \(17\)s ( \(4909\) iterations), while SCS failed to terminate within \(10^4\) iterations.

Norm-constrained minimum-norm problem

Here we solve a constrained problem of the form

\begin{eqnarray*} &&\mathrm{Minimize}_x\ \|Ax-b\|\\ &&\|Gx\| \leq 1 \end{eqnarray*}

with \(x\in\mathbb{R}^n\), \(A\in\mathbb{R}^{m\times n}\) and \(G\in\mathbb{R}^{2n\times n}\).

The problem data are:

m = 3000;
n = ceil(m/2);
A = sprandn(m,n,0.5,0.01);
b = 10*randn(m,1);
G = 2*sprandn(2*n, n, 0.1);

We solve it to an accuracy of \(\epsilon=10^{-3}\).

The problem is formulated in CVX as follows:

tic;
cvx_begin
    cvx_solver scs
    cvx_solver_settings('eps', 1e-3,...
        'do_super_scs', 1, ...
        'memory', 20);
    variable x(n)
    minimize( norm(A*x-b) )
    subject to
        norm(G*x) <= 1
cvx_end
toc
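
After cvx_end, the variable x holds the numeric optimizer, so feasibility and the attained objective can be inspected directly (a minimal sketch):

fprintf('objective : %.4f\n', norm(A*x - b));
fprintf('constraint: norm(G*x) = %.6f (should be <= 1)\n', norm(G*x));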

SuperSCS converges after \(37.1\)s ( \(116\) iterations).

On the other hand, SCS did not converge within \(242\)s ( \(5000\) iterations).

Note that SuperSCS can attain much higher precision; for instance, it converges with \(\epsilon=10^{-4}\) in \(41.2\)s ( \(131\) iterations) and with \(\epsilon=10^{-6}\) it converges after \(58\)s ( \(194\) iterations).

Regularized matrix completion problem

Let \(M\) be a given matrix whose entries at positions \((i,j)\), \(i\in I\), \(j\in J\), are missing.

Here, we formulate the matrix completion problem as a nuclear norm minimization problem.

\begin{eqnarray*} &&\mathrm{Minimize}_{X}\ \textstyle\sum_{i'\notin I,\ j'\notin J}(X_{i',j'}-M_{i',j'})^2 + \lambda \|X\|_{*}\\ &&X_{i',j'} = M_{i',j'},\ \forall i'\notin I,\ j'\notin J \end{eqnarray*}

Problem formulation:

rng('default');
rng(1);
m = 500;
n = 200;
percentage_missing = 0.9;
n_nan = ceil(percentage_missing*m*n);
M = randn(m, n);
idx = randperm(m*n);
idx_nan = idx(1:n_nan);
idx_not_nan = idx(n_nan+1:end);
M(idx_nan) = nan;
lam = 1e-3;
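
A quick consistency check of the bookkeeping above (a minimal sketch using only built-ins):

assert(all(isnan(M(idx_nan))) && ~any(isnan(M(idx_not_nan))));
assert(numel(idx_nan) + numel(idx_not_nan) == m*n);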

Problem solution:

scs_options = SuperSCSConfig.andersonConfig('tolerance', 1e-4,...
    'do_record_progress', 1, 'verbose', 1, 'memory', 5);
cvx_begin sdp
    cvx_solver scs
    scs_set_options(scs_options)
    variable X(m,n)
    minimize (lam * norm_nuc(X) + sum_square(X(idx_not_nan)-M(idx_not_nan)))
    subject to
        X(idx_not_nan) == M(idx_not_nan)
cvx_end
See Also
Easy configuration

SuperSCS (with Broyden directions and memory 50) converges in \(22\)s ( \(128\) iterations), whereas SCS takes \(43\)s ( \(677\) iterations).

Portfolio selection

Here we solve the following portfolio selection problem:

\begin{eqnarray*} &&\mathrm{Maximize}_z\ \mu'z - \gamma z'\Sigma z\\ &&1'z = 1\\ &&z \geq 0, \end{eqnarray*}

where \(z\) is the portfolio of assets, \(\mu\) is the vector of expected returns, \(\gamma\) is a risk-aversion parameter, and \(\Sigma=FF'+D\) is the asset return covariance matrix, with \(F\) the factor-loading matrix and \(D\) a diagonal matrix of asset-specific (idiosyncratic) risk.
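
In the CVX formulation below, the risk term \(z'\Sigma z\) is written as sum_square(F'*x) + sum_square(D.*x), which equals \(x'(FF'+\mathrm{diag}(D)^2)x\); that is, the entries of the code's vector D act as per-asset standard deviations. A minimal numeric check of this identity (a sketch; all names below are local to the snippet):

nn = 5; mm = 2;                      % small local instance
Ft = randn(nn, mm); Dt = rand(nn,1); xt = randn(nn,1);
q1 = xt'*(Ft*Ft' + diag(Dt.^2))*xt;
q2 = sum((Ft'*xt).^2) + sum((Dt.*xt).^2);
assert(abs(q1 - q2) < 1e-10);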

The problem is generated as follows:

density = 0.1;
rc = 0.5; % estimated reciprocal condition number
n = 100000;
m = 100;
mu = exp(0.01 * randn(n, 1)) - 1; % returns
D = rand(n,1) / 10; % idiosyncratic risk
F = sprandn(n, m, density, rc) / 10; % factor model
gamma = 1;
B = 1;

The problem is solved using CVX as follows:

cvx_begin
    cvx_solver scs
    cvx_solver_settings('eps', 1e-4,...
        'do_super_scs', 1,...
        'direction', 100,...
        'memory', 100)
    variable x(n)
    maximize(mu'*x - gamma*(sum_square(F'*x) + sum_square(D.*x)))
    sum(x) == B
    x >= 0
cvx_end

With restarted Broyden directions and memory equal to \(100\), SuperSCS solves the above problem in \(46.7\)s after \(292\) iterations.

The respective results for SCS are \(157\)s and \(3050\) iterations (approximately \(3.3\) times slower).

For a looser tolerance of \(\epsilon=10^{-3}\), SuperSCS with memory equal to \(100\) terminates after \(23.5\)s ( \(162\) iterations), while SCS converges after \(43.9\)s ( \(787\) iterations).

Principal component analysis

Here we solve the following sparse PCA problem (using an \(\ell_1\)-regularization).

The problem has the form

\begin{eqnarray*} &&\mathrm{Maximize}_X\ \mathrm{trace}(SX) - \lambda \|X\|_1\\ && \mathrm{trace}(X) = 1\\ && X = X' \succeq 0 \end{eqnarray*}

d = 200;
p = 10;
A = sprandn(p, d, 0.3, 0.0001);
S = full(A'*A);
lam = 2;
cvx_begin sdp
    cvx_solver scs
    cvx_solver_settings('eps', 1e-3,...
        'verbose', 1,...
        'do_super_scs', 1, ...       % set to 0 to run the original SCS
        'direction', 100, ...
        'memory', 100);
    variable X(d,d) symmetric
    minimize(-trace(S*X) + lam*norm(X,1))
    trace(X) == 1
    X >= 0
cvx_end
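
After cvx_end, X holds the numeric optimizer, so the constraints can be checked directly (a minimal sketch):

fprintf('trace(X) = %.6f\n', trace(X));     % should be close to 1
fprintf('min eig  = %.2e\n', min(eig(X)));  % should be nonnegative up to tolerance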

SuperSCS: \(7.6\)s, \(95\) iterations

SCS: \(65\)s, \(2291\) iterations

Speed-up: \(\times 8.6\)

Easy configuration

Instead of passing each option through cvx_solver_settings, you may call scs_set_options with a SuperSCSConfig object, which is constructed via the static factory methods of the namesake class.

Here is an example:

solver_options = SuperSCSConfig.andersonConfig();
cvx_begin
    cvx_solver scs
    scs_set_options(solver_options)
    % describe your problem here
cvx_end

Ready-to-use sets of configuration options are provided as static factory methods of SuperSCSConfig (for instance, andersonConfig, which was used in the matrix completion example above).

You may also override some of the default parameters as follows:

solver_options = SuperSCSConfig.andersonConfig('memory', 12, 'tolerance', 1e-4);
See Also
SuperSCS: first steps
SuperSCS in C: examples