Archive for the ‘Uncategorized’ Category

Elastic net, LASSO, and LARS in Python

July 26, 2012

I’m currently looking for Python implementations of the LASSO and elastic net, otherwise known as L1 and L1/L2 regularised linear regression respectively. The options seem to be scikit.learn and glmnet-python. The former offers coordinate descent or the LARS algorithm coded in pure Python (with Numpy, obviously), whereas the latter simply wraps Jerome Friedman’s Fortran code from the R glmnet package.

Timing comparison

Runtime comparison between LASSO/Elastic net implementations from scikit.learn and glmnet-python. x-axis: number of features P. y-axis: time in seconds. Synthetic data with N=400, P/10 non-zero coefficients sampled from N(0,9), and 0.01 variance Gaussian noise.


As you might expect, the Fortran code is significantly faster in general, although for large P the LARS implementation in scikit.learn is competitive with glmnet, presumably because the Python overhead becomes less noticeable. Unfortunately, as far as I can see, scikit.learn does not include a LARS implementation for the elastic net.
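For concreteness, the coordinate descent update underlying these lasso solvers is short enough to sketch directly in numpy. This is a toy illustration on synthetic data of the same flavour as the benchmark (N=400, P/10 non-zero coefficients from N(0, 9)), not the benchmarked code itself; all the function names here are mine.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the core of the lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Naive coordinate descent for min_w (1/2N)||y - Xw||^2 + lam*||w||_1."""
    N, P = X.shape
    w = np.zeros(P)
    col_sq = (X ** 2).sum(axis=0)      # per-feature squared norms
    r = y.copy()                        # residual for w = 0
    for _ in range(n_iter):
        for j in range(P):
            r = r + X[:, j] * w[j]      # remove feature j's contribution
            rho = X[:, j] @ r
            w[j] = soft_threshold(rho, N * lam) / col_sq[j]
            r = r - X[:, j] * w[j]      # put it back with the new weight
    return w

# Synthetic data in the spirit of the benchmark: N=400, P/10 non-zero
# coefficients drawn from N(0, 9), small Gaussian noise.
rng = np.random.default_rng(0)
N, P = 400, 50
w_true = np.zeros(P)
w_true[: P // 10] = rng.normal(0.0, 3.0, P // 10)
X = rng.normal(size=(N, P))
y = X @ w_true + rng.normal(0.0, 0.1, N)

w_hat = lasso_cd(X, y, lam=0.1)
```

With a near-orthogonal random design like this, a few sweeps recover the true support; the point of the Fortran and LARS implementations is doing the same thing fast for large P.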

Matlab vs. R performance benchmarking

February 3, 2009

I’ve put a performance comparison of R and Matlab up on my website, since I’m thinking of doing an R tutorial for the group. You can find my table of results, conclusions, and code here.

Binomial p-values for comparing pyrosequencing samples

December 10, 2008

Following my work on 454 pyrosequencing error rates with Professor Holmes this summer, I was recently asked how to calculate a p-value for comparing two draws from a Binomial distribution, to test the hypothesis that the number of substitutions seen in the sample is significantly greater than the number seen in the control. There is actually no need to use the Poisson approximation: the Binomial distribution very naturally takes care of varying coverage. I explain my approach here.
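One standard construction along these lines, sketched below with only the standard library, is to condition on the total substitution count: under the null hypothesis that sample and control share one substitution rate, the count landing in the sample is Binomial with a success probability set by the relative coverages, and the p-value is a binomial tail. This is a sketch of the general idea, not necessarily the exact derivation in the linked write-up, and the function names and toy numbers are mine.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed as an explicit tail sum."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def substitution_pvalue(k_sample, n_sample, k_control, n_control):
    """One-sided p-value that the sample's substitution rate exceeds the
    control's.  Conditioning on the total count k_sample + k_control, the
    sample count is Binomial with success probability given by the sample's
    share of the total coverage, so varying coverage is handled naturally."""
    total = k_sample + k_control
    p_null = n_sample / (n_sample + n_control)
    return binom_sf(k_sample, total, p_null)

# Toy example: 30 substitutions over 10,000 sample bases vs. 15 over
# 12,000 control bases.
p = substitution_pvalue(30, 10_000, 15, 12_000)
```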

OK, Matlab isn’t that weird

December 10, 2008

So as Johan pointed out, I made a pretty big mistake in my previous post: rand(10000) does not generate 10000 random numbers, it generates a 10000×10000 matrix of them. Here’s the corrected code:

tic
a=binornd(1,0.9,10000,1);   % (1) vectorised binornd
toc
tic
a=rand(10000,1)<0.9;        % (2) vectorised rand with threshold
toc
tic
for i=1:10000               % (3) binornd in a loop
    a=binornd(1,0.9);
end
toc
tic
for i=1:10000               % (4) rand with threshold in a loop
    a=rand<0.9;
end
toc

And here are the corrected times:

Elapsed time is 0.002682 seconds.
Elapsed time is 0.000368 seconds.
Elapsed time is 0.462683 seconds.
Elapsed time is 0.000401 seconds.

This does seem a lot more reasonable. The extra overhead in calling binornd presumably explains why (1) takes a lot longer than (2). (4) is almost as fast as (2), presumably because it is automatically vectorised. However, it appears the compiler does not manage to vectorise (3), since it is a lot slower than (1).

Although the numbers are now reasonable, there still seems to be a lot of inefficiency in how the random number generator’s output is used by binornd. rand generates a random number in (0,1). If it generated a true real from the uniform distribution on (0,1), we could construct an infinite sequence of independent binary random variables just by taking its binary representation. Let’s assume that it does generate a true draw from the uniform distribution up to the numerical accuracy defined by the 64-bit internal representation of a double: we then have 52 random bits (since 52 of the 64 bits define the fractional part), i.e. 52 bits of information. The distribution I actually want to draw from (0 with probability 0.9 and 1 with probability 0.1) has entropy -0.9\log_2(0.9)-0.1\log_2(0.1)=0.469 bits. When we do the rand<0.9 operation, we therefore throw away 52-0.469=51.5 bits, which is pretty wasteful in any situation where random number generation is critical, such as Monte Carlo simulation.
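These numbers are easy to verify; a quick check in Python rather than Matlab (the variable names are mine, and the "52 fair bits" extraction is the in-principle construction described above, assuming an ideal uniform draw):

```python
import random
from math import log2

def bernoulli_entropy(p):
    """Entropy in bits of a {0, 1} draw with P(1) = p."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

h = bernoulli_entropy(0.1)    # about 0.469 bits

# A double's fraction field carries 52 bits, so one ideal uniform draw
# could in principle yield 52 independent fair coin flips:
u = random.random()
bits = [(int(u * 2**52) >> i) & 1 for i in range(52)]

wasted = 52 - h               # bits discarded by the rand < 0.9 trick
```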

Matlab is weird

December 5, 2008

Try running this code in Matlab:

tic
a=binornd(1,0.9,10000,1);
toc
tic
a=rand(10000)<0.9;
toc
tic
for i=1:10000
    a=binornd(1,0.9);
end
toc
tic
for i=1:10000
    a=rand<0.9;
end
toc

This is the output I get:

Elapsed time is 0.001861 seconds.
Elapsed time is 3.184994 seconds.
Elapsed time is 0.455896 seconds.
Elapsed time is 0.000405 seconds.

This seems very strange. I can see that the second approach is wasteful: generating a random number between 0 and 1 contains a lot more information than the one bit the cutoff reduces it to. But why is the final loop so fast? Aren’t loops meant to be bad in Matlab? It’s reassuring that the binornd function is faster than the second approach, but even stranger that it’s then slower in the loop of the third approach!

My conclusion is that Matlab is a very strange platform, and that you should be very careful about assuming one way of doing something will be the fastest. It also drives home the point that it’s a pretty dodgy environment to use for performance testing of algorithms!
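The loop-versus-vectorised contrast isn’t unique to Matlab; here is a rough Python/numpy analogue of the vectorised comparison and the scalar loop (illustrative only, since absolute timings are machine-dependent):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

t0 = time.perf_counter()
a_vec = rng.random(n) < 0.9           # vectorised: one RNG call, n comparisons
t_vec = time.perf_counter() - t0

t0 = time.perf_counter()
a_loop = np.empty(n, dtype=bool)
for i in range(n):                    # scalar loop: n separate RNG calls
    a_loop[i] = rng.random() < 0.9
t_loop = time.perf_counter() - t0
```

In CPython the loop pays interpreter overhead on every iteration, so the vectorised version wins by a wide margin; Matlab’s JIT apparently rescues some loops (the fast fourth case above) but not others.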

“Ridiculous stats”

December 1, 2008

So Johan recently showed me this ridiculous Nature article, in which the authors fit a linear regression to winning Olympic 100m sprint times and conclude that by 2150 women will run faster than men. Two responses (here and here) criticise the article, but I thought I should compare with the obvious Bayesian approach: GP regression. I used Carl’s code to perform the regression in Matlab, with a squared exponential covariance function, and I optimized the hyperparameters using the minimize function. The two plots below show the results.

Figure 1. Gaussian Process regression for the winning Olympic men’s 100m sprint times.

Figure 2. Gaussian Process regression for the winning Olympic women’s 100m sprint times.

These plots show that GP regression agrees pretty well with intuition: the data tells us nothing about what will happen past about 2030.
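For anyone without Matlab, the same kind of regression takes only a few lines of numpy. This is a sketch with a squared exponential kernel and hand-picked hyperparameters, not Carl’s optimised GPML code, and the input series below is a made-up placeholder rather than the actual Olympic data; it shows the same qualitative behaviour, with the predictive variance reverting to the prior far from the data.

```python
import numpy as np

def se_kernel(x1, x2, ell=20.0, sf2=1.0):
    """Squared exponential covariance k(x,x') = sf2*exp(-(x-x')^2/(2*ell^2))."""
    d = x1[:, None] - x2[None, :]
    return sf2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.01):
    """GP posterior mean and pointwise variance under a zero-mean prior."""
    K = se_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = se_kernel(x_test, x_train)
    Kss = se_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Placeholder inputs only (not the real winning times): Olympic years and a
# slowly decreasing series, centred before regression.
years = np.arange(1900, 2012, 4, dtype=float)
times = 11.0 - 0.012 * (years - 1900) + 0.05 * np.sin(years / 7.0)
mu = times.mean()
mean, var = gp_predict(years, times - mu, np.array([2030.0, 2150.0]))
mean += mu
```

By 2150 the test point is many lengthscales from the last observation, so the posterior collapses back to the prior: exactly the "the data tells us nothing out there" behaviour in the plots, and the opposite of the linear extrapolation in the Nature piece.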

In this case, sample.

November 25, 2008

In this previous post I described the debate over whether to sample or collapse missing observation variables. I have now run the experiments, and as promised I will present the results here. These experiments are based on my infinite sparse Factor Analysis model (which you can read about here), run on the gene expression dataset included in Mike West’s BFRM package here.


Figure 1. Twenty runs of 1000 MCMC iterations of my infinite sparse Factor Analysis model, with different approaches to handling the missing values.

Figure 1 shows that the uncollapsed sampler with no added noise (red) performs best: it achieves the lowest predictive error in the shortest time. Adding noise to the missing values, as you should for a genuine Gibbs sampling scheme (green), both decreases performance in absolute terms and has a surprisingly detrimental effect on run time (this could simply be the cost of sampling noise at each iteration). The collapsed sampler performs better in absolute terms than the uncollapsed sampler with noise, and has better run time.

Figure 2. Boxplot of prediction error after 1000 MCMC steps for each missing value approach.

Figure 2 confirms this conclusion: sampling the missing values and not adding any noise gives the best performance.
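To make the "sample, don’t collapse" option concrete: given the current loadings and factors, the uncollapsed update simply replaces each missing entry by its model reconstruction, optionally plus noise. Below is a schematic numpy sketch of this generic factor-analysis step, not the actual infinite sparse Factor Analysis code; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_missing(Y, mask, G, X, psi, add_noise=True):
    """Resample the missing entries of Y under the model Y = G @ X + noise.

    mask marks missing entries; psi holds per-row (diagonal) noise variances,
    reducing to the isotropic model when all entries of psi are equal."""
    recon = G @ X
    Y = Y.copy()
    if add_noise:
        noise = rng.normal(0.0, np.sqrt(psi)[:, None], Y.shape)
        Y[mask] = (recon + noise)[mask]   # a genuine Gibbs draw
    else:
        Y[mask] = recon[mask]             # plug in the conditional mean
    return Y

# Tiny synthetic check: D=5 observed dimensions, K=2 factors, N=50 points.
D, K, N = 5, 2, 50
G = rng.normal(size=(D, K))
X = rng.normal(size=(K, N))
psi = np.full(D, 0.01)
Y = G @ X + rng.normal(0.0, 0.1, (D, N))
mask = rng.random((D, N)) < 0.1           # 10% of entries missing
Y_filled = update_missing(Y, mask, G, X, psi, add_noise=False)
```

The red-versus-green comparison in Figure 1 is exactly the add_noise=False versus add_noise=True switch here.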

On a related note, I’ve been looking at the effect of removing the assumption of isotropic noise. This seems to be quite a reasonable thing to do, and doesn’t make the calculations much more involved at all.

Figure 3. Twenty MCMC runs of 1000 iterations with isotropic vs. non-isotropic noise models.

Figure 4. Boxplot of predictive error with diagonal or isotropic noise model.

Figures 3-4 confirm the intuition that including a full diagonal noise model improves the predictive performance of the model.

Hi!

November 12, 2008

On this blog I’ll be putting up ideas I have about machine learning in general, maybe with a slight twist towards bioinformatics.