--- title: "SDM4 in R: Linear Regression (Chapter 7)" author: "Nicholas Horton (nhorton@amherst.edu) and Sarah McDonald" date: "June 13, 2018" output: pdf_document: fig_height: 3 fig_width: 6 html_document: fig_height: 3 fig_width: 5 word_document: fig_height: 4 fig_width: 6 --- ```{r, include = FALSE} # Don't delete this chunk if you are using the mosaic package # This loads the mosaic and dplyr packages require(mosaic) ``` ```{r, include = FALSE} # knitr settings to control how R chunks work. require(knitr) opts_chunk$set( tidy = FALSE, # display code as typed size = "small" # slightly smaller font for code ) ``` ## Introduction and background This document is intended to help describe how to undertake analyses introduced as examples in the Fourth Edition of *Stats: Data and Models* (2014) by De Veaux, Velleman, and Bock. More information about the book can be found at http://wps.aw.com/aw_deveaux_stats_series. This file as well as the associated R Markdown reproducible analysis source file used to create it can be found at http://nhorton.people.amherst.edu/sdm4. This work leverages initiatives undertaken by Project MOSAIC (http://www.mosaic-web.org), an NSF-funded effort to improve the teaching of statistics, calculus, science and computing in the undergraduate curriculum. In particular, we utilize the `mosaic` package, which was written to simplify the use of R for introductory statistics courses. A short summary of the R needed to teach introductory statistics can be found in the mosaic package vignettes (http://cran.r-project.org/web/packages/mosaic). A paper describing the mosaic approach was published in the *R Journal*: https://journal.r-project.org/archive/2017/RJ-2017-024. ## Chapter 7: Linear Regression ### Section 7.1: Least squares: the line of best fit Figure 7.2 (page 183) displays a scatterplot of the Burger King data with a superimposed regression line. ```{r message=FALSE} library(mosaic) library(readr) options(digits = 3) BK <- read_csv("http://nhorton.people.amherst.edu/sdm4/data/Burger_King_Items.csv") names(BK) gf_point(Fat ~ Protein, ylab = "Fat (gm)", xlab = "Protein (gm)", data = BK) %>% gf_lm() ``` We can calculate the residual for a particular value with 31 grams of protein. ```{r} BKmod <- lm(Fat ~ Protein, data = BK) BKfun <- makeFun(BKmod) BKfun(31) # predicted value for a item with 31 grams of protein ``` ### Section 7.2 The linear model ```{r} coef(BKmod) BKfun(0) BKfun(32) - BKfun(31) ``` ### Section 7.3 Finding the least squares line ```{r} sx <- sd(~ Protein, data = BK) sx sy <- sd(~ Fat, data = BK) sy r <- cor(Protein ~ Fat, data = BK) r # same as cor(Fat ~ Protein)! r*sy/sx coef(BKmod)[2] ``` ### Section 7.4 Regression to the mean ### Section 7.5 Examining the residuals Figure 7.5 (page 193) displays the scatterplot of residuals as a function of the amount of protein. The `msummary()` function generates a lot of output (much of which won't be familiar). ```{r} gf_point(resid(BKmod) ~ Protein, type = c("p", "r"), data = BK) %>% gf_lm() msummary(BKmod) ``` The residual standard error of 10.6 grams matches the value reported on page 194. ### Section 7.6 R-squared: variation accounted for by the model ```{r} rsquared(BKmod) ``` ### Section 7.7 Regression assumptions and conditions