Least Absolute Shrinkage and Selection Operator (LASSO) performs regularization and variable selection on a given model. Depending on the size of the penalty term, LASSO shrinks less relevant predictors to (possibly) zero, which enables us to consider a more parsimonious model. In this exercise set we will use the glmnet package (package description: here) to implement LASSO regression in R.
Answers to the exercises are available here.
Exercise 1
Load the lars package and the diabetes dataset (Efron, Hastie, Johnstone and Tibshirani (2003) "Least Angle Regression" Annals of Statistics), which has patient-level data on the progression of diabetes. Next, load the glmnet package, which will be used to implement LASSO.
The dataset has three matrices:
x has a smaller set of independent variables;
x2 contains the full set, with quadratic and interaction terms added;
y is the dependent variable, a quantitative measure of the progression of diabetes.
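The loading step above can be sketched as follows; this assumes the lars and glmnet packages are installed:

```r
# lars provides the diabetes dataset; glmnet provides glmnet() and cv.glmnet()
library(lars)
library(glmnet)

data(diabetes)   # a data frame with matrix columns x, x2 and y
str(diabetes)    # x: 10 predictors; x2: 64 predictors; y: response
```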
Exercise 2
It is a good idea to visually inspect the relationship of each of the predictors with the dependent variable. Generate separate scatterplots, with the line of best fit, for all the predictors in x, keeping y on the vertical axis. Use a loop to automate the process.
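One way to automate the plots, as a sketch (the 2-by-5 panel layout is an arbitrary choice):

```r
library(lars)
data(diabetes)

# One scatterplot per predictor in x, each with its OLS line of best fit
par(mfrow = c(2, 5))  # 10 predictors on a single device
for (i in seq_len(ncol(diabetes$x))) {
  plot(diabetes$x[, i], diabetes$y,
       xlab = colnames(diabetes$x)[i], ylab = "y")
  abline(lm(diabetes$y ~ diabetes$x[, i]), col = "red")
}
```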
Exercise 3
Regress y on the predictors in x using OLS. We will use this result as a benchmark for comparison.
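A sketch of the benchmark regression:

```r
library(lars)
data(diabetes)

# OLS of y on the 10 predictors in x; lm() handles the matrix column directly
ols_fit <- lm(y ~ x, data = diabetes)
summary(ols_fit)  # benchmark coefficients and R-squared
```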
Exercise 4
Use the glmnet function to plot the path of each of x's variable coefficients against the L1 norm of the beta vector. This graph indicates at which stage each coefficient shrinks to zero.
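A sketch of the coefficient-path plot; note that alpha = 1 (i.e. LASSO) is glmnet's default:

```r
library(lars)
library(glmnet)
data(diabetes)

lasso_fit <- glmnet(diabetes$x, diabetes$y)   # LASSO fit over a path of lambdas
plot(lasso_fit, xvar = "norm", label = TRUE)  # paths vs. the L1 norm of beta
```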
Exercise 5
Use the cv.glmnet function to get the cross-validation curve and the value of lambda that minimizes the mean cross-validation error.
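The cross-validation step could look like this sketch; the seed value is an arbitrary choice for reproducibility:

```r
library(lars)
library(glmnet)
data(diabetes)

set.seed(1)  # CV fold assignment is random
cv_fit <- cv.glmnet(diabetes$x, diabetes$y)
plot(cv_fit)       # mean cross-validation error against log(lambda)
cv_fit$lambda.min  # lambda that minimizes the mean CV error
```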
Exercise 6
Using the minimum value of lambda from the previous exercise, get the estimated beta matrix. Note that some coefficients have been shrunk to zero. This indicates which predictors are important in explaining the variation in y.
Exercise 7
To get a more parsimonious model we can use a higher value of lambda that is within one standard error of the minimum. Use this value of lambda to get the beta coefficients. Note that more coefficients are now shrunk to zero.
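Both sets of coefficients can be extracted from the cross-validated fit; a sketch (glmnet labels the one-standard-error value lambda.1se):

```r
library(lars)
library(glmnet)
data(diabetes)

set.seed(1)  # arbitrary seed for reproducible folds
cv_fit <- cv.glmnet(diabetes$x, diabetes$y)

coef(cv_fit, s = "lambda.min")  # some coefficients are exactly zero
coef(cv_fit, s = "lambda.1se")  # largest lambda within one SE of the minimum;
                                # more coefficients are shrunk to zero
```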
Exercise 8
As mentioned earlier, x2 contains a wider variety of predictors. Using OLS, regress y on the predictors in x2 and evaluate the results.
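A sketch of the full-model regression:

```r
library(lars)
data(diabetes)

# OLS with the full set of 64 predictors in x2
ols_full <- lm(y ~ x2, data = diabetes)
summary(ols_full)  # with this many terms, many estimates are imprecise
```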
Exercise 9
Repeat exercise 4 for the new model.
Exercise 10
Repeat exercises 5 and 6 for the new model and see which coefficients are shrunk to zero. This is an effective way to narrow down the set of important predictors when there are many candidates.
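The whole sequence for the wider predictor set can be sketched in one pass; as before, the seed is an arbitrary choice:

```r
library(lars)
library(glmnet)
data(diabetes)

# Coefficient paths for the 64-predictor matrix x2
fit2 <- glmnet(diabetes$x2, diabetes$y)
plot(fit2, xvar = "norm", label = TRUE)

# Cross-validation, then coefficients at lambda.min
set.seed(1)
cv2 <- cv.glmnet(diabetes$x2, diabetes$y)
coef(cv2, s = "lambda.min")  # zero entries mark predictors dropped by LASSO
```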