Reviews of Books and Teaching Materials

A First Course in Linear Model Theory, 2nd ed.

A First Course in Linear Model Theory is an excellent graduate-level textbook that offers a comprehensive collection of results on the classical linear model. Based on the authors’ lecture notes for a two-semester course, this book serves as an introductory theoretical text for graduate students in statistics or related quantitative fields.

The book is organized into 14 chapters and can be divided into three main parts: (i) a mathematical review of essential results for linear model theory; (ii) the foundational results of the classical linear regression model; and (iii) special topics, including fixed-, random-, and mixed-effects models, generalized linear models, and regularized regression, among others.

In the mathematical review, which spans Chapters 1–3 and 5, the authors provide an extensive exposition of vector and matrix algebra, properties of matrices frequently encountered in linear models, and solutions to systems of linear equations. Additionally, they discuss crucial properties of the multivariate normal and related distributions, all of which play a critical role in statistical inference for the classical linear regression model. This approach renders the book relatively self-contained, enabling readers to acquire (or refresh) the necessary mathematical background before delving into the main chapters.

The core theory of linear models is presented in Chapters 4 and 7–9, covering hypothesis testing, confidence intervals, model diagnostics, and variable selection. Notably, the authors focus almost exclusively on the inferential theory of the classical linear regression model with fixed regressors and Gaussian homoscedastic errors. The mathematical review along with these core chapters thus encompasses all the main topics for a standard one-semester graduate course in the theory of the classical linear regression model.

Following the core chapters, the remaining sections address a wide variety of topics. Chapters 10–11 discuss fixed-, random-, and mixed-effects models; Chapter 12 covers generalized linear models (GLMs); and Chapters 13–14 briefly touch upon various special topics, such as multivariate models, Bayesian linear regression, nonparametric regression, and regularized regression. Understandably, these chapters are less self-contained; they are better suited as supplementary material for special lectures, which students can explore further as needed.

There are a few areas in which the book could be expanded to include theoretical results that are beneficial for applied statistical work. In particular, the book would benefit from also covering the standard asymptotic theory of linear regression with random regressors, misspecified functional form, and heteroscedastic errors, using sandwich variance estimators and the nonparametric bootstrap (Goldberger Citation1991; Angrist and Pischke Citation2009; Wakefield Citation2013; Buja et al. Citation2019a, Citation2019b). Furthermore, many applied scientists use linear regression for causal inference. Thus, Chapter 9 would benefit from some discussion on the problem of covariate selection and omitted variable bias when using regression with the goal of estimating causal effects (Cinelli, Forney, and Pearl Citation2020; Cinelli and Hazlett Citation2020, Citation2022). These suggestions, of course, do not detract from the overall quality and effectiveness of the textbook.
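To make the first suggestion concrete, a minimal sketch of the heteroscedasticity-robust (HC0 sandwich) variance estimator discussed above is given below, contrasted with the classical homoscedastic variance estimate for OLS. The simulated data, variable names, and design (errors whose variance grows with the regressor) are purely illustrative assumptions, not material from the book under review.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# Heteroscedastic errors: the error standard deviation grows with |x|
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1 + np.abs(x))

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)   # OLS coefficient estimates
resid = y - X @ beta

XtX_inv = np.linalg.inv(X.T @ X)

# Classical variance estimate, valid under homoscedasticity
s2 = resid @ resid / (n - X.shape[1])
var_classical = s2 * XtX_inv

# HC0 sandwich: (X'X)^{-1} (sum_i e_i^2 x_i x_i') (X'X)^{-1}
meat = (X * resid[:, None] ** 2).T @ X
var_sandwich = XtX_inv @ meat @ XtX_inv

se_classical = np.sqrt(np.diag(var_classical))
se_sandwich = np.sqrt(np.diag(var_sandwich))
```

In this design the robust standard error for the slope exceeds the classical one, since the error variance is largest exactly where the regressor is most informative; the classical formula would understate the sampling uncertainty.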

In conclusion, A First Course in Linear Model Theory is an excellent graduate-level textbook that comprehensively covers the now classical linear regression model. Its well-structured organization, thorough mathematical review, and clear presentation of core concepts make it an excellent, self-contained resource for a first course in linear models, both for instructors and students. Moreover, the book offers numerous examples, several exercises (some with solutions), R code, and detailed proofs for key results, making it also a good resource for self-study.

Carlos Cinelli
Department of Statistics, University of Washington
Seattle, WA
[email protected]

References

  • Angrist, J. D., and Pischke, J.-S. (2009), Mostly Harmless Econometrics: An Empiricist’s Companion, Princeton: Princeton University Press.
  • Buja, A., Brown, L., Berk, R., George, E., Pitkin, E., Traskin, M., Zhang, K., and Zhao, L. (2019a), “Models as Approximations I,” Statistical Science, 34, 523–544. DOI: 10.1214/18-STS693.
  • Buja, A., Brown, L., Kuchibhotla, A. K., Berk, R., George, E., and Zhao, L. (2019b), “Models as Approximations II,” Statistical Science, 34, 545–565. DOI: 10.1214/18-STS694.
  • Cinelli, C., Forney, A., and Pearl, J. (2020), “A Crash Course in Good and Bad Controls,” Sociological Methods & Research. DOI: 10.1177/00491241221099552.
  • Cinelli, C., and Hazlett, C. (2020), “Making Sense of Sensitivity: Extending Omitted Variable Bias,” Journal of the Royal Statistical Society, Series B, 82, 39–67. DOI: 10.1111/rssb.12348.
  • ——— (2022), “An Omitted Variable Bias Framework for Sensitivity Analysis of Instrumental Variables,” available at SSRN 4217915.
  • Goldberger, A. S. (1991), A Course in Econometrics, Cambridge, MA: Harvard University Press.
  • Wakefield, J. (2013), Bayesian and Frequentist Regression Methods (Vol. 23), New York: Springer.
