Examples¶
In the following we show how ForBES can be used to model and solve some well-known optimization problems.
Support vector machines¶
SVMs can be cast as the following convex optimization problem
\[
\begin{aligned}
\text{minimize}\quad & \frac{\lambda}{2}\|x\|_2^2 + \sum_{i=1}^{m} \max\{0,\; 1 - b_i z_i\}, \\
\text{subject to}\quad & Ax = z
\end{aligned}
\]
Therefore we have
\[
x_1 = x,\qquad f(x) = \frac{\lambda}{2}\|x\|_2^2,\qquad g(z) = \sum_{i=1}^{m} \max\{0,\; 1 - b_i z_i\},\qquad A_1 = A,\qquad B = -I
\]
Function g is the so-called hinge loss, and it is provided in the library (see Functions for more information). The problem can therefore be defined as:
f = squaredNorm(lambda); % f(x) = (lambda/2)*||x||_2^2
g = hingeLoss(1, b); % vector b contains the labels
out = forbes(f, g, [], [], {A, -1, zeros(m, 1)}); % {A, B, c} encodes A*x + B*z = c, with B = -I passed as the scalar -1
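As a quick end-to-end sketch, the problem can be set up and solved on synthetic data as follows (the dimensions, the random data, and the value of lambda below are illustrative assumptions, not part of the library):

% Illustrative sketch: soft-margin SVM on random data (all data below is assumed)
m = 200; n = 50; % number of samples and of features
A = randn(m, n); % feature matrix, one sample per row
b = 2*(rand(m, 1) > 0.5) - 1; % labels in {-1, +1}
lambda = 0.1; % regularization weight
f = squaredNorm(lambda);
g = hingeLoss(1, b);
out = forbes(f, g, [], [], {A, -1, zeros(m, 1)});
disp(out); % inspect the returned results structure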
Sparse logistic regression¶
Consider the following problem
\[
\text{minimize}\quad \frac{1}{m}\sum_{i=1}^{m} \log\bigl(1 + \exp(-b_i \langle a_i, x\rangle)\bigr) + r\|x\|_1
\]
The smooth term in this case is the logistic loss, and the nonsmooth term is the ℓ1 regularization. We then have
\[
f(x) = \frac{1}{m}\sum_{i=1}^{m} \log\bigl(1 + \exp(-x_i)\bigr),\qquad g(x) = r\|x\|_1 = r\sum_i |x_i|
\]
This problem can be defined using the functions in the library (see Functions for more information) as follows:
f = logLoss(1/m); % f(x) = (1/m)*sum_i log(1+exp(-x_i))
g = l1Norm(r); % g(x) = r*||x||_1
C = diag(sparse(b))*X; % vector b contains the labels, X is the design matrix
out = forbes(f, g, [], C); % solves: minimize f(C*x) + g(x)
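As with the SVM example, a self-contained run on synthetic data might look as follows (the dimensions, the random data, and the value of r are illustrative assumptions):

% Illustrative sketch: sparse logistic regression on random data (all data below is assumed)
m = 500; n = 100; % number of samples and of features
X = randn(m, n); % design matrix, one sample per row
b = 2*(rand(m, 1) > 0.5) - 1; % labels in {-1, +1}
r = 0.05; % l1 regularization weight
f = logLoss(1/m);
g = l1Norm(r);
C = diag(sparse(b))*X; % scale each row of X by its label
out = forbes(f, g, [], C);
disp(out); % inspect the returned results structure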