Elastic net, LASSO, and LARS in Python

July 26, 2012

I’m currently looking for implementations of the LASSO and Elastic Net, otherwise known as L1 and L1/L2 regularised linear regression respectively, in Python. The options seem to be scikit.learn and glmnet-python. The former offers coordinate descent or the LARS algorithm coded in pure Python (with Numpy obviously), whereas the latter just wraps Jerome Friedman’s Fortran code from the R glmnet package.

Timing comparison

Figure: Runtime comparison between LASSO/Elastic net implementations from scikit.learn and glmnet-python. x-axis: number of features P; y-axis: time in seconds. Synthetic data with N=400, P/10 non-zero coefficients sampled from N(0,9), and 0.01-variance Gaussian noise.


As you might expect, the Fortran code is significantly faster in general, although for large P the LARS scikit.learn implementation is competitive with glmnet, presumably because the Python overhead becomes less noticeable. Unfortunately as far as I can see scikit.learn does not include a LARS implementation for the elastic net.
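For reference, here is a rough sketch of the kind of synthetic benchmark described above, using scikit-learn's Lasso, ElasticNet and LassoLars classes. This is only an illustration of the setup, not the exact script behind the plot, and the regularisation settings are arbitrary placeholders.

import time
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet, LassoLars

def make_data(p, n=400, seed=0):
    # Synthetic data as described above: P/10 non-zero coefficients drawn
    # from N(0, 9) and Gaussian noise with variance 0.01.
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    nonzero = rng.choice(p, size=p // 10, replace=False)
    beta[nonzero] = rng.normal(0.0, 3.0, size=p // 10)
    y = X @ beta + rng.normal(0.0, 0.1, size=n)
    return X, y

X, y = make_data(p=1000)
for model in (Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5),
              LassoLars(alpha=0.1)):
    start = time.time()
    model.fit(X, y)
    print(type(model).__name__, time.time() - start, "seconds")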

Matlab vs. R performance benchmarking

February 3, 2009

I’ve put a performance comparison of R and Matlab up on my website, since I am thinking of doing an R tutorial for the group. You can find my table of results, conclusions, and code here.

Binomial p-values for comparing pyrosequencing samples

December 10, 2008

Regarding my work on 454 pyrosequencing error rates with Professor Holmes this summer, I was recently asked how to calculate a p-value for comparing two draws from a Binomial distribution, to test the hypothesis that the number of substitutions seen in the sample is significantly greater than the number seen in the control. There is actually no need to use the Poisson approximation, and the Binomial distribution very naturally takes care of varying coverage. I explain my approach here.
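As a rough illustration of the kind of calculation involved (a sketch of one natural way to do it, not necessarily word-for-word what is in the linked write-up): treat the control error rate as fixed at its observed value, and ask how likely it is to see at least as many substitutions as were observed in the sample under that rate, given the sample’s coverage. The counts below are made-up placeholders.

from scipy.stats import binom

# Substitution counts and coverage; placeholder values only.
k_control, n_control = 12, 5000
k_sample, n_sample = 31, 4200

p_control = k_control / n_control
# One-sided p-value: P(X >= k_sample) for X ~ Binomial(n_sample, p_control).
# binom.sf(k - 1, n, p) gives P(X >= k).
p_value = binom.sf(k_sample - 1, n_sample, p_control)
print(p_value)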

OK, Matlab isn’t that weird

December 10, 2008

So, as Johan pointed out, I made a pretty big mistake in my previous post: rand(10000) does not generate 10000 random numbers; it generates a 10000×10000 matrix of random numbers. Here’s the corrected code:

% (1) vectorised call to binornd
tic
a=binornd(1,0.9,10000,1);
toc
% (2) vectorised comparison against rand
tic
a=rand(10000,1)<0.9;
toc
% (3) looped calls to binornd
tic
for i=1:10000
    a=binornd(1,0.9);
end
toc
% (4) looped comparisons against rand
tic
for i=1:10000
    a=rand<0.9;
end
toc

And here are the corrected times:

Elapsed time is 0.002682 seconds.
Elapsed time is 0.000368 seconds.
Elapsed time is 0.462683 seconds.
Elapsed time is 0.000401 seconds.

This does seem a lot more reasonable. The extra overhead in calling binornd presumably explains why (1) takes a lot longer than (2). (4) is almost as fast as (2), presumably because the loop in (4) is automatically vectorised. However, it appears the compiler does not manage to vectorise (3), since it is far slower than (1).

Although reasonable, there still seems to be a lot of inefficiency in how the random number generator’s output is used, whether inside binornd or in the rand<0.9 trick. rand generates a random number in (0,1). If it generated a true real from the uniform distribution on (0,1), we could construct an infinite sequence of independent binary random variables just by taking its binary representation. Let’s assume that it does generate a true draw from the uniform distribution, up to the numerical accuracy of the 64-bit internal representation of a double: we then have 52 random bits (out of the 64 bits, 52 define the fractional part), i.e. 52 bits of information. The distribution I actually want to draw from (0 with probability 0.9 and 1 with probability 0.1) has entropy -0.9\log_2(0.9)-0.1\log_2(0.1)=0.469 bits. When we do the rand<0.9 operation, we therefore throw away 52-0.469 \approx 51.5 bits, which is pretty wasteful in any situation where random number generation is critical, such as Monte Carlo simulation.
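A quick check of that arithmetic (in Python, purely to verify the numbers above):

import math

# Entropy of a Bernoulli(0.1) draw versus the ~52 random mantissa bits
# consumed by a single call to rand.
p = 0.1
h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
print(h)        # about 0.469 bits per draw
print(52 - h)   # about 51.5 bits discarded per draw
print(52 / h)   # in principle one uniform draw could supply ~110 such bits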

Matlab is weird

December 5, 2008

Try running this code in Matlab:

tic
a=binornd(1,0.9,10000,1);
toc
tic
a=rand(10000)<0.9;
toc
tic
for i=1:10000
    a=binornd(1,0.9);
end
toc
tic
for i=1:10000
    a=rand<0.9;
end
toc

This is the output I get:

Elapsed time is 0.001861 seconds.
Elapsed time is 3.184994 seconds.
Elapsed time is 0.455896 seconds.
Elapsed time is 0.000405 seconds.

This seems very strange. I can see that the second approach is wasteful: generating a random number between 0 and 1 contains a lot more information than the one bit the cutoff reduces it to. But why is the final loop so fast? Aren’t loops meant to be bad in Matlab? It’s reassuring that the binornd function is faster than the second approach, but even more strange that it’s then slower in the loop in the third approach!

My conclusion is that Matlab is a very strange platform, and that you should be very careful about assuming one way of doing something will be the fastest. It also drives home the point that it’s a pretty dodgy environment to use for performance testing of algorithms!

“Ridiculous stats”

December 1, 2008

So Johan recently showed me this ridiculous Nature article, where they do linear regression on winning Olympic 100m sprint times to conclude that by 2150 women will run faster than men. Two responses (here and here) criticize the article, but I thought I should compare with the obvious Bayesian approach: GP regression. I used Carl’s code to perform the regression in Matlab, with a squared exponential covariance function, and I optimized the hyperparameters using the minimise function. The two plots below show the results.

Figure 1. Gaussian Process regression for the winning Olympic men's 100m sprint times.

Figure 2. Gaussian Process regression for the winning Olympic women's 100m sprint times.

These plots show that GP regression agrees pretty well with intuition: the data tells us nothing about what will happen past about 2030.
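For completeness, here is a roughly equivalent sketch in Python using scikit-learn's Gaussian process tools; it is not the GPML/Matlab code used for the figures, and the year/time arrays below are placeholders that would need to be replaced with the actual winning times.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Placeholder data: replace with the actual Olympic winning times.
years = np.array([1900, 1912, 1924, 1936, 1948, 1960,
                  1972, 1984, 1996, 2008], dtype=float)
times = np.array([11.0, 10.8, 10.6, 10.3, 10.3, 10.2,
                  10.1, 10.0, 9.9, 9.7])

# RBF is the squared exponential covariance; WhiteKernel models the observation noise.
kernel = ConstantKernel() * RBF(length_scale=20.0) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(years[:, None], times)  # hyperparameters set by maximising the marginal likelihood

grid = np.linspace(1900, 2150, 200)[:, None]
mean, std = gp.predict(grid, return_std=True)  # predictive mean and uncertainty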

In this case, sample.

November 25, 2008

In this previous post I described the debate over whether to sample or collapse missing observation variables. I have now run the experiments, the results of which I will present here as promised. These experiments are based on my infinite sparse Factor Analysis model, which you can read about here, run on the gene expression dataset you can find as part of Mike West’s BFRM package here.

Figure 1. Twenty runs of 1000 MCMC iterations of my infinite sparse Factor Analysis model with different approaches to handling the missing values.

Figure 1 shows that the uncollapsed sampler with no added noise (red) performs best: it achieves the lowest predictive error in the shortest time. Adding noise to the missing values (as you should for a genuine Gibbs sampling scheme) in this version (green) both decreases the performance in absolute terms and has a surprisingly detrimental effect on the run time (this could just be the cost of sampling noise at each iteration). The collapsed sampler performs better in absolute terms than the uncollapsed sampler with noise, and has better run time.

Figure 2. Boxplot of prediction error after 1000 MCMC steps for each missing value approach.

Figure 2 confirms this conclusion: sampling the missing values and not adding any noise gives the best performance.

On a related note, I’ve been looking at the effect of removing the assumption of isotropic noise. This seems to be quite a reasonable thing to do, and doesn’t make the calculations much more involved at all.

Figure 3. 20 MCMC runs of 1000 iterations with isotropic vs. non-isotropic noise model.

Figure 4. Boxplot of predictive error with diagonal or isotropic noise model.

Figures 3 and 4 confirm the intuition that including a full diagonal noise model does improve the predictive performance of the model.

A Bayesian view of the Stein estimator

November 17, 2008

I’ve been attending Statistical Theory, a course in Part III Maths at Cambridge, taught by Richard Samworth. In the latest lecture, Richard showed that the Stein estimator for the mean of a multivariate Gaussian has lower expected loss (measured as the squared Euclidean distance between the true and estimated values of the mean) than the MLE for every value of \theta, provided the dimension p is at least 3. From a frequentist perspective this appears completely unintuitive, but from a Bayesian perspective it appears much more reasonable.

Assume we observe a vector X drawn from a multivariate normal of dimension p, with mean \theta and identity covariance matrix. The MLE of \theta is then just X, but the Stein estimator is

\theta_s=\left( 1-\frac{p-2}{||X||^2} \right) X

This improvement is usually referred to as shrinkage, because the estimate is shrunk towards 0 relative to the MLE.
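A quick simulation makes this easy to check empirically; the sketch below uses an arbitrary dimension and true mean, and is not taken from the course.

import numpy as np

# Monte Carlo comparison of squared-error risk: MLE (X itself) versus the
# Stein estimator, for a fixed theta and X ~ N_p(theta, I).
rng = np.random.default_rng(0)
p, n_reps = 10, 100000
theta = rng.normal(0.0, 1.0, size=p)

X = theta + rng.standard_normal((n_reps, p))
shrink = 1.0 - (p - 2) / np.sum(X**2, axis=1)
stein = shrink[:, None] * X

print(np.mean(np.sum((X - theta) ** 2, axis=1)))      # risk of the MLE (about p)
print(np.mean(np.sum((stein - theta) ** 2, axis=1)))  # smaller for the Stein estimator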

How would a Bayesian approach this problem? First let’s put a Gaussian prior on \theta, so

\theta \sim N_p(0,\lambda^{-1}I)

where \lambda is a precision (inverse variance). In a fully Bayesian setting we would then put a Gamma prior on \lambda, but unfortunately we would then have to resort to sampling to infer the posterior over \theta. If instead \lambda is assumed known, the posterior of \theta is

P(\theta|X,\lambda) \propto P(X|\theta)\,P(\theta|\lambda) \propto N_p(\theta;(1+\lambda)^{-1}X,(1+\lambda)^{-1}I)

Thus the expected value of \theta is
E(\theta|X)=(1+\lambda)^{-1}X

Now let’s find a point estimate of \lambda from the data (an empirical Bayes approach). This is not ideal, but it is at least tractable. To do this we’ll first integrate out \theta:

P(X|\lambda)=\int P(X|\theta) P(\theta|\lambda) d\theta
=N_p(X;0,\lambda_x^{-1}I)
where
\lambda_x=\frac{\lambda}{1+\lambda}.

A simple estimate of the variance \lambda_x^{-1} is
\widehat{\lambda_x^{-1}}=\frac{|X|^2}{p-1}
(the maximum likelihood estimate would be |X|^2/p).

Solving \lambda_x=\frac{\lambda}{1+\lambda} for \lambda and substituting this estimate gives
\hat{\lambda} = \left( \frac{|X|^2}{p-1}-1 \right)^{-1}

Substituting into the expression for E(\theta|X) above and rearranging gives
E(\theta|X) = \left( 1-\frac{p-1}{|X|^2} \right) X,
which is very close to the Stein estimate. I suspect that some choice of prior on \lambda would result in a MAP estimate which would give the p-2 term. (Alternatively, since (1+\lambda)^{-1}=1-\lambda_x, the posterior mean is (1-\lambda_x)X, and plugging in the unbiased estimate (p-2)/|X|^2 of \lambda_x recovers the Stein estimator exactly.)

The conclusion is that an estimator with unintuitively desirable properties in a frequentist framework is an intuitively sensible estimator under a Bayesian framework.

To sample or not to sample? Missing values in Gibbs sampling

November 12, 2008

I ran into an interesting debate last week. Consider the factor analysis model:

y=Gx+\epsilon

where y is an observed vector, G is the mixing matrix, x is a vector of latent variables and the last term is isotropic Gaussian noise. Assume we observe a sequence of n y’s drawn iid from the model. Then writing Y=[y_1 \dots y_n], X=[x_1 \dots x_n] and E=[\epsilon_1 \dots \epsilon_n] we have

Y=GX+E

A very naive way to infer the posterior P(G,X|Y) would be to perform Gibbs sampling: successively sample from P(G|X,Y), then from P(X|G,Y). Assuming conjugate Gaussian priors on G and X, this becomes particularly easy. Note that the likelihood function, assuming Gaussian noise with precision \lambda_e, is

\left(\frac{\lambda_e}{2\pi}\right)^{\frac{ND}{2}}\exp\left\{-\frac{\lambda_e}{2}\,\mathrm{tr}\left((Y-GX)^T(Y-GX)\right)\right\}

Now assume, however, that some of the elements of Y are missing at random. How should we cope with this? Two superficially quite different approaches are possible.

The first, and simplest, way would be to consider the unobserved elements of Y as latent variables. Then in our Gibbs sampling scheme, having initialized G and X, we simply sample from P(Y^u|G,X), i.e. we set the unobserved elements of Y to the corresponding elements of GX+E (sampling E as noise). The sampling steps for G and X are then the same as before. This does not change the model structure in any way, and is a completely valid Gibbs sampling scheme.

The second approach is to exclude terms involving the missing values from the likelihood function. We can achieve this algebraically by element-wise multiplication of the (Y-GX) term by a binary matrix H. Element (i,j) of H is equal to one iff element (i,j) of Y was observed. Now the likelihood function becomes:

\left(\frac{\lambda_e}{2\pi}\right)^{\frac{N_o}{2}}\exp\left\{-\frac{\lambda_e}{2}\,\mathrm{tr}\left(((Y-GX)\circ H)^T((Y-GX)\circ H)\right)\right\}

where N_o is the number of observed elements of Y. This approach makes sampling G and X a little more tricky because H affects the covariance structure of the conditional distributions. To deal with the algebra (specifically to get rid of the troublesome Hadamard product) I use some ideas from Tom Minka’s matrix algebra notes.
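To make the contrast concrete, here is a minimal numpy sketch of the two options, with made-up dimensions and variable names rather than anything from my actual sampler: the first block is the imputation step from the first approach, and the second evaluates the masked log-likelihood from the second approach.

import numpy as np

rng = np.random.default_rng(0)
D, K, N = 5, 2, 20
lambda_e = 4.0                                   # noise precision
G = rng.standard_normal((D, K))
X = rng.standard_normal((K, N))
Y = G @ X + rng.normal(0.0, lambda_e ** -0.5, size=(D, N))
H = rng.random((D, N)) < 0.8                     # True where Y is observed

# Approach 1: treat unobserved entries of Y as latent variables and resample
# them given the current G and X; G and X are then sampled exactly as in the
# fully observed model.
Y_imputed = Y.copy()
noise = rng.normal(0.0, lambda_e ** -0.5, size=(D, N))
Y_imputed[~H] = (G @ X + noise)[~H]

# Approach 2: drop the missing terms from the likelihood by masking the
# residual with H element-wise (equivalent to integrating those entries out).
R = (Y - G @ X) * H
N_obs = H.sum()
loglik = 0.5 * N_obs * np.log(lambda_e / (2 * np.pi)) - 0.5 * lambda_e * np.sum(R**2)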

Having had two pretty smart people arguing the case for each side, we decided to give this some more thought. Both schemes are valid Gibbs samplers for the same model. The first is considerably easier to implement, but intuitively should not perform as well: once we have a sample for the unobserved elements of Y, the algorithm has no knowledge of which elements of Y are observed when it samples G and X, so we must be introducing more noise than we would by leaving those terms out.

Jurgen made the very good point that leaving the terms out of the likelihood function is equivalent to integrating out the unobserved variables. This leads to the intuition that the question of which approach to take is actually equivalent to the much broader question of how much to collapse a Gibbs sampler. This question has garnered a great deal of interest of late, and seems to be very much model dependent. If you have two highly correlated variables (such as G and X above) then integrating out one would seem very beneficial, since exploring the joint parameter space will be very slow otherwise, whereas Jurgen and Finale have found cases (specifically the linear Gaussian IBP model) where the uncollapsed Gibbs sampler is much faster.

I’ll report back again when I have some hard numbers to support these ideas!

Hi!

November 12, 2008

On this blog I’ll be putting up ideas I have about machine learning in general, maybe with a slight twist towards bioinformatics.

