Testing Random Effects in Linear Mixed Models: Another Look at the F-test (with discussion)
Francis K. C. Hui1, Samuel Muller2, and Alan H. Welsh1
1. Mathematical Sciences Institute, Australian National University, Canberra
2. School of Mathematics and Statistics, University of Sydney
In many applications of mixed models, an important step is assessing the significance of all or a subset of the random effects included in the model. Extensive research has been conducted on testing the significance of random effects (or variance components) in linear mixed models, ranging from corrections to the asymptotic null distribution to simulation-based methods and variations thereof.
This talk re-examines one of the earliest and simplest methods of random effects testing in linear mixed models, namely the F-test based on linear combinations of the responses, or FLC test. For current statistical practice, we argue that the FLC test is underused and should be given more consideration, especially as an initial or “gateway” test for linear mixed models. We present a very general derivation of the FLC test that is applicable to a broad class of models where the random effects and/or normally distributed errors can be correlated. We discuss three advantages of the FLC test often overlooked in modern applications of linear mixed models: computational speed, generality, and its exactness as a test. Empirical studies provide new insight into the finite sample performance of the FLC test, identifying cases where it is competitive with or even outperforms modern methods in terms of power, as well as settings where it performs worse than simulation-based methods for testing random effects, all the while being faster to compute.
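To make the idea concrete, here is a minimal sketch of an exact F-test for a single random intercept in a balanced one-way design, a classical special case of the FLC approach. The function name `flc_style_ftest` and the simulation setup are our own illustrative choices, not from the talk; under normality and H0: sigma_a^2 = 0, the between-to-within mean square ratio follows an exact F distribution, which is what makes the test exact and fast.

```python
import numpy as np
from scipy import stats

def flc_style_ftest(y, groups):
    """Exact F-test of H0: sigma_a^2 = 0 in a balanced one-way
    random-intercept model y_ij = mu + a_i + e_ij with normal errors.
    Returns the F statistic and its p-value.

    (Illustrative sketch: the FLC test in the talk covers a much
    broader class of models than this balanced one-way case.)
    """
    y = np.asarray(y, dtype=float)
    labels = np.unique(groups)
    g = len(labels)
    n = len(y) // g  # balanced design: n observations per group
    group_means = np.array([y[groups == lab].mean() for lab in labels])
    grand_mean = y.mean()
    # Between-group mean square (g - 1 degrees of freedom)
    msa = n * np.sum((group_means - grand_mean) ** 2) / (g - 1)
    # Within-group mean square (g * (n - 1) degrees of freedom)
    mse = sum(np.sum((y[groups == lab] - m) ** 2)
              for lab, m in zip(labels, group_means)) / (g * (n - 1))
    F = msa / mse
    p = stats.f.sf(F, g - 1, g * (n - 1))  # exact null distribution
    return F, p

# Simulate data under the null (sigma_a^2 = 0): no true group effect.
rng = np.random.default_rng(0)
g, n = 10, 5
groups = np.repeat(np.arange(g), n)
y = 1.0 + rng.normal(size=g * n)  # mu = 1, no random intercept
F, p = flc_style_ftest(y, groups)
```

Because the null distribution is an exact F, no simulation or asymptotic correction is needed, which is the source of the speed and exactness advantages mentioned above.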
While straightforward to understand and implement, the FLC test stimulates deeper thinking into the notion of treating random effects as fixed effects, and its implications for estimation and inference in mixed models more generally. In the latter portion of this talk, we make connections between the principle behind the FLC test and estimation approaches such as penalized quasi-likelihood and variational approximations, as well as model selection techniques such as information criteria, penalized likelihood methods, and the concept of degrees of freedom. Ultimately, we hope to motivate future research into methods that, in some way, regard random effects as fixed, into the asymptotic behaviour of such methods, and into more empirical comparisons with standard approaches that treat the random effects as random.
Francis Hui is a lecturer in statistics at the Mathematical Sciences Institute, Australian National University (ANU). Francis completed his PhD in ecological statistics in 2014 at the University of New South Wales, and afterwards undertook a postdoctoral fellowship at the ANU supervised by Alan Welsh (ANU) and Samuel Mueller (University of Sydney). He has been unable to leave the glorified country town that is Canberra since. Francis’ research spans a mixture of methodological, computational, and applied statistics, including but not limited to dimension reduction and variable selection, longitudinal and correlated data analysis, approximate likelihood estimation and inference, and semiparametric regression. Much of his applied work is driven by multi-species distribution modeling in community ecology, but more recently he has dabbled in longitudinal modeling for mental health and income. All of his research is complemented by copious amounts of tea drinking and unhealthy amounts of anime watching.
This plenary address will be delivered in AH1 on Friday 30 November at 10:00.