merTools
A package for getting the most out of large multilevel models in R
by Jared E. Knowles and Carl Frederick
Working with generalized linear mixed models (GLMM) and linear mixed
models (LMM) has become increasingly easy with advances in the lme4
package. As we have found ourselves using these models more and more
within our work, we, the authors, have developed a set of tools for
simplifying and speeding up common tasks for interacting with merMod
objects from lme4. This package provides those tools.
Installation
# development version
library(devtools)
install_github("jknowles/merTools")
# CRAN version
install.packages("merTools")
Recent Updates
merTools 0.6.1 (Spring 2023)
- Maintenance release to keep package listed on CRAN
- Fix a small bug where parallel code path is run twice (#126)
- Update plotting functions to avoid deprecated aes_string() calls (#127)
- Fix (#115) in description
- Speed up PI using @bbolker pull request (#120)
- Updated package maintainer contact information
merTools 0.5.0
New Features
- subBoot now works with glmerMod objects as well
- reMargins: a new function that allows the user to marginalize the prediction over breaks in the distribution of the random effects; see ?reMargins and the new reMargins vignette (closes #73)
Bug fixes
- Fixed an issue where known convergence errors were issuing warnings and causing the test suite to not work
- Fixed an issue where models with a random slope, no intercept, and no fixed term could not be predicted (#101)
- Fixed an issue with shinyMer not working with substantive fixed effects (#93)
merTools 0.4.1
New Features
- Standard errors reported by merModList functions now apply the Rubin correction for multiple imputation
Bug fixes
- Contribution by Alex Whitworth (@alexWhitworth) adding error checking to plotting functions
Shiny App and Demo
The easiest way to demo the features of this package is to use the bundled Shiny application, which presents a number of the metrics described here to aid in exploring the model. To do this:
library(merTools)
m1 <- lmer(y ~ service + lectage + studage + (1|d) + (1|s), data=InstEval)
shinyMer(m1, simData = InstEval[1:100, ]) # just try the first 100 rows of data
On the first tab, the function presents the prediction intervals for the
data selected by the user, which are calculated using the predictInterval
function within the package. This function calculates prediction
intervals quickly by sampling from the simulated distribution of the
fixed effect and random effect terms and combining these simulated
estimates to produce a distribution of predictions for each observation.
This allows prediction intervals to be generated from very large models
where the use of bootMer
would not be feasible computationally.
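For a sense of what this avoids, here is a minimal sketch of the bootstrap route via lme4::bootMer. The prediction function and the small nsim value are illustrative only; on a model of this size even a modest number of bootstrap refits can take far longer than predictInterval.
# Bootstrap-based intervals for comparison (slow; run with care)
predFun <- function(fit) predict(fit, newdata = InstEval[1:10, ])
bootPreds <- lme4::bootMer(m1, FUN = predFun, nsim = 100,
                           use.u = TRUE, type = "parametric")
# 90% interval for each of the 10 observations from the bootstrap replicates
t(apply(bootPreds$t, 2, quantile, probs = c(0.05, 0.5, 0.95)))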
On the next tab the distribution of the fixed effect and group-level
effects is depicted on confidence interval plots. These are useful for
diagnostics and provide a way to inspect the relative magnitudes of
various parameters. This tab makes use of four related functions in
merTools: FEsim, plotFEsim, REsim, and plotREsim, which are
available to be used on their own as well.
On the third tab are some convenient ways to show the influence or
magnitude of effects by leveraging the power of predictInterval. For
each case (up to 12) in the selected data, the user can view the
impact of changing either one of the fixed effects or one of the grouping
level terms. Using the REimpact function, each case is simulated with
the model's prediction if all else were held equal, but the observation
was moved through the distribution of the fixed effect or the random
effect term. This is plotted on the scale of the dependent variable,
which allows the user to compare the magnitude of effects across
variables, and also between models on the same data.
Predicting
Standard prediction looks like this:
predict(m1, newdata = InstEval[1:10, ])
#> 1 2 3 4 5 6 7 8
#> 3.146337 3.165212 3.398499 3.114249 3.320686 3.252670 4.180897 3.845219
#> 9 10
#> 3.779337 3.331013
With predictInterval we obtain predictions that are more like the
standard objects produced by lm and glm:
predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 500, level = 0.9,
stat = 'median')
#> fit upr lwr
#> 1 3.107632 5.190402 0.9368052
#> 2 3.097741 4.879558 1.2738180
#> 3 3.401938 5.503266 1.3132856
#> 4 3.121414 5.060498 1.1299338
#> 5 3.290234 5.422217 1.2647681
#> 6 3.146418 5.023299 1.4372531
#> 7 4.086394 6.158931 1.9473926
#> 8 3.738121 5.631336 1.7886288
#> 9 3.763437 5.734384 1.6697661
#> 10 3.352128 5.337015 1.2505294
Note that predictInterval
is slower because it is computing
simulations. It can also return all of the simulated yhat values as an
attribute of the prediction object itself.
predictInterval
uses the sim
function from the arm
package heavily
to draw the distributions of the parameters of the model. It then
combines these simulated values to create a distribution of the yhat
for each observation.
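To retrieve the simulated yhat values mentioned above, predictInterval has a returnSims argument. A minimal sketch follows; the "sim.results" attribute name is taken from the package documentation, so check ?predictInterval if it differs in your version.
preds <- predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 500,
                         level = 0.9, stat = 'median', returnSims = TRUE)
# full matrix of simulated yhat values (rows = observations, columns = simulations)
yhatSims <- attr(preds, "sim.results")
dim(yhatSims)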
Inspecting the Prediction Components
We can also explore the components of the prediction interval by asking
predictInterval
to return specific components of the prediction
interval.
predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 200, level = 0.9,
stat = 'median', which = "all")
#> effect fit upr lwr obs
#> 1 combined 3.25969862 5.093473 1.088987 1
#> 2 combined 3.25185416 5.195660 1.212398 2
#> 3 combined 3.48995449 5.513208 1.432098 3
#> 4 combined 3.33674443 4.943025 1.220998 4
#> 5 combined 3.29898222 5.594381 1.301636 5
#> 6 combined 3.22542573 5.121188 1.285540 6
#> 7 combined 4.35693740 6.456601 2.095860 7
#> 8 combined 3.77082296 5.755422 1.540146 8
#> 9 combined 3.77734423 6.131939 2.157273 9
#> 10 combined 3.28954595 5.079436 1.221874 10
#> 11 s -0.08603730 2.203910 -1.704490 1
#> 12 s 0.22623596 1.850055 -1.624032 2
#> 13 s 0.30822074 1.894273 -1.964165 3
#> 14 s 0.22535767 2.270494 -1.856316 4
#> 15 s -0.14243952 1.787597 -2.050745 5
#> 16 s -0.30163547 1.817632 -2.103720 6
#> 17 s 0.38506882 2.098101 -1.341332 7
#> 18 s 0.44294445 2.415432 -1.500022 8
#> 19 s 0.40448327 2.399836 -1.913328 9
#> 20 s 0.22839660 2.570579 -1.745981 10
#> 21 d -0.31888163 1.606477 -2.297953 1
#> 22 d -0.37366911 1.555461 -2.075957 2
#> 23 d -0.16054175 1.715221 -2.203618 3
#> 24 d -0.20694151 1.876587 -2.340907 4
#> 25 d 0.11129869 2.016248 -1.774396 5
#> 26 d -0.05587943 1.782312 -2.069027 6
#> 27 d 0.56077534 2.799003 -1.288401 7
#> 28 d 0.19538590 2.276306 -1.843640 8
#> 29 d 0.26885661 2.163566 -1.712320 9
#> 30 d -0.26743035 1.520186 -2.388651 10
#> 31 fixed 3.51054377 5.293941 1.341126 1
#> 32 fixed 3.26108728 4.927388 1.169438 2
#> 33 fixed 2.97127880 4.866901 1.215838 3
#> 34 fixed 3.03712179 5.036753 1.154742 4
#> 35 fixed 3.39117433 5.284877 1.437624 5
#> 36 fixed 3.32728732 5.126942 1.577967 6
#> 37 fixed 3.34239946 5.172533 1.513171 7
#> 38 fixed 3.35180632 5.118299 1.431649 8
#> 39 fixed 3.58880671 5.105028 1.554977 9
#> 40 fixed 3.27099171 5.169194 1.489591 10
This can lead to some useful plotting:
library(ggplot2)
plotdf <- predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 2000,
level = 0.9, stat = 'median', which = "all",
include.resid.var = FALSE)
plotdfb <- predictInterval(m1, newdata = InstEval[1:10, ], n.sims = 2000,
level = 0.9, stat = 'median', which = "all",
include.resid.var = TRUE)
plotdf <- dplyr::bind_rows(plotdf, plotdfb, .id = "residVar")
plotdf$residVar <- ifelse(plotdf$residVar == 1, "No Model Variance",
"Model Variance")
ggplot(plotdf, aes(x = obs, y = fit, ymin = lwr, ymax = upr)) +
geom_pointrange() +
geom_hline(yintercept = 0, color = I("red"), size = 1.1) +
scale_x_continuous(breaks = c(1, 10)) +
facet_grid(residVar~effect) + theme_bw()
#> Warning: Using `size` aesthetic for lines was deprecated in ggplot2 3.4.0.
#> ℹ Please use `linewidth` instead.
We can also investigate the makeup of the prediction for each observation.
ggplot(plotdf[plotdf$obs < 6,],
aes(x = effect, y = fit, ymin = lwr, ymax = upr)) +
geom_pointrange() +
geom_hline(yintercept = 0, color = I("red"), size = 1.1) +
facet_grid(residVar~obs) + theme_bw()
Plotting
merTools
also provides functionality for inspecting merMod
objects
visually. The easiest place to start is the posterior distributions of both
fixed and random effect parameters.
feSims <- FEsim(m1, n.sims = 100)
head(feSims)
#> term mean median sd
#> 1 (Intercept) 3.22302416 3.22328224 0.01893656
#> 2 service1 -0.07353238 -0.07503196 0.01322848
#> 3 lectage.L -0.18550746 -0.18689223 0.01757622
#> 4 lectage.Q 0.02531346 0.02532793 0.01262718
#> 5 lectage.C -0.02446487 -0.02332242 0.01217267
#> 6 lectage^4 -0.02074847 -0.02171527 0.01314005
And we can also plot this:
plotFEsim(FEsim(m1, n.sims = 100), level = 0.9, stat = 'median', intercept = FALSE)
We can also quickly make caterpillar plots for the random-effect terms:
reSims <- REsim(m1, n.sims = 100)
head(reSims)
#> groupFctr groupID term mean median sd
#> 1 s 1 (Intercept) 0.15555959 0.14798520 0.3359070
#> 2 s 2 (Intercept) -0.03924940 -0.03158545 0.3216017
#> 3 s 3 (Intercept) 0.32218754 0.29815012 0.2944466
#> 4 s 4 (Intercept) 0.22220605 0.19904690 0.2887927
#> 5 s 5 (Intercept) 0.05738118 0.03711978 0.3373406
#> 6 s 6 (Intercept) 0.14302324 0.14638548 0.2363066
plotREsim(REsim(m1, n.sims = 100), stat = 'median', sd = TRUE)
Note that plotREsim highlights group levels whose simulated
distribution does not overlap 0; these appear darker. The lighter
bars represent grouping levels that are not distinguishable from 0 in
the data.
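As a rough approximation of that shading rule (a sketch assuming a normal approximation to each simulated distribution, not the exact rule plotREsim uses), you can flag those levels directly from the REsim output created above:
# flag levels whose approximate 90% interval (mean +/- 1.64 * sd) excludes zero
reSims$distinct <- abs(reSims$mean) - 1.64 * reSims$sd > 0
table(reSims$groupFctr, reSims$distinct)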
Sometimes the random effects can be hard to interpret and not all of
them are meaningfully different from zero. To help with this, merTools
provides the expectedRank
function, which provides the percentile
ranks for the observed groups in the random effect distribution taking
into account both the magnitude and uncertainty of the estimated effect
for each group.
ranks <- expectedRank(m1, groupFctr = "d")
head(ranks)
#> groupFctr groupLevel term estimate std.error ER pctER
#> 2 d 1 Intercept 0.3944919 0.08665152 835.3005 74
#> 3 d 6 Intercept -0.4428949 0.03901988 239.5363 21
#> 4 d 7 Intercept 0.6562681 0.03717200 997.3569 88
#> 5 d 8 Intercept -0.6430680 0.02210017 138.3445 12
#> 6 d 12 Intercept 0.1902940 0.04024063 702.3410 62
#> 7 d 13 Intercept 0.2497464 0.03216255 750.0174 66
A nice feature of expectedRank is that you can return the expected rank
for all grouping factors simultaneously and use them:
ranks <- expectedRank(m1)
head(ranks)
#> groupFctr groupLevel term estimate std.error ER pctER
#> 2 s 1 Intercept 0.16732800 0.08165665 1931.570 65
#> 3 s 2 Intercept -0.04409538 0.09234250 1368.160 46
#> 4 s 3 Intercept 0.30382219 0.05204082 2309.941 78
#> 5 s 4 Intercept 0.24756175 0.06641699 2151.828 72
#> 6 s 5 Intercept 0.05232329 0.08174130 1627.693 55
#> 7 s 6 Intercept 0.10191653 0.06648394 1772.548 60
ggplot(ranks, aes(x = term, y = estimate)) +
geom_violin(fill = "gray50") + facet_wrap(~groupFctr) +
theme_bw()
Effect Simulation
It can still be difficult to interpret the results of LMMs and GLMMs,
especially the relative influence of varying parameters on the
predicted outcome. This is where the REimpact
and the wiggle
functions in merTools
can be handy.
impSim <- REimpact(m1, InstEval[7, ], groupFctr = "d", breaks = 5,
n.sims = 300, level = 0.9)
#> Warning: executing %dopar% sequentially: no parallel backend registered
impSim
#> case bin AvgFit AvgFitSE nobs
#> 1 1 1 2.777249 2.863249e-04 193
#> 2 1 2 3.247372 6.770648e-05 240
#> 3 1 3 3.546850 5.480821e-05 254
#> 4 1 4 3.817535 6.531969e-05 265
#> 5 1 5 4.216303 2.176350e-04 176
The result of REimpact
shows the change in the yhat
as the case we
supplied to newdata
is moved from the first to the fifth quintile in
terms of the magnitude of the group factor coefficient. We can see here
that the individual professor effect has a strong impact on the outcome
variable. This can be shown graphically as well:
ggplot(impSim, aes(x = factor(bin), y = AvgFit, ymin = AvgFit - 1.96*AvgFitSE,
ymax = AvgFit + 1.96*AvgFitSE)) +
geom_pointrange() + theme_bw() + labs(x = "Bin of `d` term", y = "Predicted Fit")
Here the standard error is a bit different; it is the weighted standard
error of the mean effect within the bin. It does not take into account
the variability within the effects of each observation in the bin;
accounting for this variation will be a future addition to merTools.
Explore Substantive Impacts
Another feature of merTools
is the ability to easily generate
hypothetical scenarios to explore the predicted outcomes of a merMod
object and understand what the model is saying in terms of the outcome
variable.
Let's take the case where we want to explore the impact of a model with an interaction term between a categorical and a continuous predictor. First, we fit a model with interactions:
data(VerbAgg)
fmVA <- glmer(r2 ~ (Anger + Gender + btype + situ)^2 +
(1|id) + (1|item), family = binomial,
data = VerbAgg)
#> Warning in checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
#> Model failed to converge with max|grad| = 0.0543724 (tol = 0.002, component 1)
Now we prep the data using the draw
function in merTools. Here we
draw the average observation from the model frame. We then wiggle the
data by expanding the dataframe to include the same observation repeated
but with different values of the variable specified by the varlist
parameter. Here, we expand the dataset to all values of btype, situ,
and Anger in turn.
# Select the average case
newData <- draw(fmVA, type = "average")
newData <- wiggle(newData, varlist = "btype",
valueslist = list(unique(VerbAgg$btype)))
newData <- wiggle(newData, varlist = "situ",
                  valueslist = list(unique(VerbAgg$situ)))
newData <- wiggle(newData, varlist = "Anger",
                  valueslist = list(unique(VerbAgg$Anger)))
head(newData, 10)
#> r2 Anger Gender btype situ id item
#> 1 N 20 F curse other 5 S3WantCurse
#> 2 N 20 F scold other 5 S3WantCurse
#> 3 N 20 F shout other 5 S3WantCurse
#> 4 N 20 F curse self 5 S3WantCurse
#> 5 N 20 F scold self 5 S3WantCurse
#> 6 N 20 F shout self 5 S3WantCurse
#> 7 N 11 F curse other 5 S3WantCurse
#> 8 N 11 F scold other 5 S3WantCurse
#> 9 N 11 F shout other 5 S3WantCurse
#> 10 N 11 F curse self 5 S3WantCurse
The next step is familiar: we simply pass this new dataset to
predictInterval
in order to generate predictions for these
counterfactuals. Then we plot the predicted values against the
continuous variable, Anger
, and facet and group on the two categorical
variables situ
and btype
respectively.
plotdf <- predictInterval(fmVA, newdata = newData, type = "probability",
stat = "median", n.sims = 1000)
plotdf <- cbind(plotdf, newData)
ggplot(plotdf, aes(y = fit, x = Anger, color = btype, group = btype)) +
geom_point() + geom_smooth(aes(color = btype), method = "lm") +
facet_wrap(~situ) + theme_bw() +
labs(y = "Predicted Probability")
#> `geom_smooth()` using formula = 'y ~ x'
Marginalizing Random Effects
Finally, the REmargins function marginalizes the prediction over breaks in the distribution of a grouping factor's random effect; here each sampled case is evaluated at each quartile of the item intercept:
# get cases
case_idx <- sample(1:nrow(VerbAgg), 10)
mfx <- REmargins(fmVA, newdata = VerbAgg[case_idx,], breaks = 4, groupFctr = "item",
type = "probability")
ggplot(mfx, aes(y = fit_combined, x = breaks, group = case)) +
geom_point() + geom_line() +
theme_bw() +
scale_y_continuous(breaks = 1:10/10, limits = c(0, 1)) +
coord_cartesian(expand = FALSE) +
labs(x = "Quartile of item random effect Intercept for term 'item'",
y = "Predicted Probability",
title = "Simulated Effect of Item Intercept on Predicted Probability for 10 Random Cases")