# Machine Learning: Lasso Regression

Lasso regression is, like ridge regression, a shrinkage method. It differs from ridge regression in its choice of penalty: lasso imposes an $$\ell_1$$ penalty on the parameters $$\beta$$. That is, lasso finds an assignment to $$\beta$$ that minimizes the function

$$f(\beta) = \|X\beta - Y\|_2^2 + \lambda \|\beta\|_1,$$

where $$\lambda$$ is a hyperparameter and, as usual, $$X$$ is the training data and $$Y$$ the observations. The $$\ell_1$$ penalty encourages sparsity in the learned parameters, and, as we will see, can drive many coefficients to zero. In this sense, lasso is a continuous feature selection method.

In this notebook, we show how to fit a lasso model using CVXPY, how to evaluate the model, and how to tune the hyperparameter $$\lambda$$.

## Writing the objective function

We can decompose the objective function as the sum of a least squares loss function and an $$\ell_1$$ regularizer.

## Generating data

We generate training examples and observations that are linearly related; we make the relationship sparse, and we’ll see that lasso approximately recovers it.
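One way to generate such data is sketched below (the function name and default values are our own; the `density` parameter controls what fraction of the true coefficients are nonzero):

```python
import numpy as np

def generate_data(m=100, n=20, sigma=5.0, density=0.2, seed=1):
    """Generate data from a sparse linear model Y = X beta* + noise."""
    rng = np.random.default_rng(seed)
    beta_star = rng.standard_normal(n)
    # Zero out a (1 - density) fraction of entries so the true model is sparse.
    idxs = rng.choice(n, size=int((1 - density) * n), replace=False)
    beta_star[idxs] = 0.0
    X = rng.standard_normal((m, n))
    Y = X @ beta_star + rng.normal(0, sigma, size=m)
    return X, Y, beta_star
```

With `density=0.2`, 80 percent of the true coefficients are exactly zero, matching the setting discussed later in the regularization path section.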

## Fitting the model

All we need to do to fit the model is create a CVXPY problem where the objective is to minimize the objective function defined above. We make $$\lambda$$ a CVXPY parameter, so that we can use a single CVXPY problem to obtain estimates for many values of $$\lambda$$.

## Evaluating the model

Just as we saw for ridge regression, regularization improves generalizability.

## Regularization path and feature selection

As $$\lambda$$ increases, the parameters are driven to $$0$$. By $$\lambda \approx 10$$, approximately 80 percent of the coefficients are exactly zero. This parallels the fact that $$\beta^*$$ was generated such that 80 percent of its entries were zero. The features corresponding to the slowest decaying coefficients can be interpreted as the most important ones.

Qualitatively, lasso differs from ridge in that the former often drives parameters to exactly zero, whereas the latter shrinks parameters but does not usually zero them out. That is, lasso results in sparse models; ridge (usually) does not.