Degrees of freedom in lme output

In a repeated measures design, how are the degrees of freedom calculated?
Specifically, how is the value 114 in the lme output below calculated?


# Using lme (from the nlme package)
library(nlme)
baseline <- lme(attitude ~ 1, random = ~1|participant/drink/imagery, data = longAttitude, method = "ML")
drinkModel <- update(baseline, .~. + drink)
imageryModel <- update(drinkModel, .~. + imagery)
attitudeModel <- update(imageryModel, .~. + drink:imagery)
anova(baseline, drinkModel, imageryModel, attitudeModel)
#>               Model df      AIC      BIC    logLik   Test   L.Ratio
#> baseline          1  5 1503.590 1519.555 -746.7950                 
#> drinkModel        2  7 1498.461 1520.812 -742.2306 1 vs 2   9.12891
#> imageryModel      3  9 1350.529 1379.265 -666.2644 2 vs 3 151.93237
#> attitudeModel     4 13 1316.512 1358.020 -645.2560 3 vs 4  42.01676
#>               p-value
#> baseline             
#> drinkModel     0.0104
#> imageryModel   <.0001
#> attitudeModel  <.0001

attitudeModel <- lme(attitude ~ drink*imagery, random = ~1|participant/drink/imagery, data = longAttitude)
summary(attitudeModel)
#> Linear mixed-effects model fit by REML
#>  Data: longAttitude 
#>       AIC      BIC    logLik
#>   1309.22 1350.062 -641.6101
#> 
#> Random effects:
#>  Formula: ~1 | participant
#>          (Intercept)
#> StdDev: 0.0007071812
#> 
#>  Formula: ~1 | drink %in% participant
#>         (Intercept)
#> StdDev:    6.201306
#> 
#>  Formula: ~1 | imagery %in% drink %in% participant
#>         (Intercept)  Residual
#> StdDev:    7.404812 0.2787554
#> 
#> Fixed effects: attitude ~ drink * imagery 
#>                                                  Value Std.Error  DF
#> (Intercept)                                   7.894444 0.9726183 114
#> drinkAlcoholvsWater                           2.188889 0.6877450  38
#> drinkBeervsWine                              -1.750000 1.1912092  38
#> imageryNegativevsOther                        6.738889 0.3905443 114
#> imageryPositivevsNeutral                     -6.633333 0.6764426 114
#> drinkAlcoholvsWater:imageryNegativevsOther    0.190278 0.2761565 114
#> drinkBeervsWine:imageryNegativevsOther        3.237500 0.4783171 114
#> drinkAlcoholvsWater:imageryPositivevsNeutral  0.445833 0.4783171 114
#> drinkBeervsWine:imageryPositivevsNeutral     -0.662500 0.8284696 114
#>                                                t-value p-value
#> (Intercept)                                   8.116694  0.0000
#> drinkAlcoholvsWater                           3.182704  0.0029
#> drinkBeervsWine                              -1.469095  0.1500
#> imageryNegativevsOther                       17.255121  0.0000
#> imageryPositivevsNeutral                     -9.806203  0.0000
#> drinkAlcoholvsWater:imageryNegativevsOther    0.689021  0.4922
#> drinkBeervsWine:imageryNegativevsOther        6.768522  0.0000
#> drinkAlcoholvsWater:imageryPositivevsNeutral  0.932087  0.3533
#> drinkBeervsWine:imageryPositivevsNeutral     -0.799667  0.4256
#>  Correlation: 
#>                                              (Intr) drnkAW drnkBW imgrNO
#> drinkAlcoholvsWater                          0                          
#> drinkBeervsWine                              0      0                   
#> imageryNegativevsOther                       0      0      0            
#> imageryPositivevsNeutral                     0      0      0      0     
#> drinkAlcoholvsWater:imageryNegativevsOther   0      0      0      0     
#> drinkBeervsWine:imageryNegativevsOther       0      0      0      0     
#> drinkAlcoholvsWater:imageryPositivevsNeutral 0      0      0      0     
#> drinkBeervsWine:imageryPositivevsNeutral     0      0      0      0     
#>                                              imgrPN dAW:NO dBW:NO dAW:PN
#> drinkAlcoholvsWater                                                     
#> drinkBeervsWine                                                         
#> imageryNegativevsOther                                                  
#> imageryPositivevsNeutral                                                
#> drinkAlcoholvsWater:imageryNegativevsOther   0                          
#> drinkBeervsWine:imageryNegativevsOther       0      0                   
#> drinkAlcoholvsWater:imageryPositivevsNeutral 0      0      0            
#> drinkBeervsWine:imageryPositivevsNeutral     0      0      0      0     
#> 
#> Standardized Within-Group Residuals:
#>           Min            Q1           Med            Q3           Max 
#> -0.0819512018 -0.0196920226  0.0007259018  0.0237264898  0.0979229869 
#> 
#> Number of Observations: 180
#> Number of Groups: 
#>                         participant              drink %in% participant 
#>                                  20                                  60 
#> imagery %in% drink %in% participant 
#>                                 180



Created on 2018-07-21 by the reprex package (v0.2.0).

This is more of a statistics question than an R question :slightly_smiling_face:

I'm assuming that you're using the nlme package. Douglas Bates, who wrote lmer, has written about this in "lmer, p-values and all that", where he also goes over degrees of freedom.

If this is not sufficient, then I suggest heading over to https://stats.stackexchange.com/, where you can read more and ask additional questions.

Thank you very much. I figured it was more of a math problem, and I did learn something from your recommendations.

The StackExchange site is very helpful too. Thanks. :wink:

The method for computing degrees of freedom that lme() uses is laid out in Pinheiro & Bates 2000, p. 91, but the most comprehensive discussion of the related controversies and pitfalls that I know of (including a case where lme() calculates the degrees of freedom incorrectly!) is at Ben Bolker's GLMM FAQ:
https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#why-doesnt-lme4-display-denominator-degrees-of-freedomp-values-what-other-options-do-i-have
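
To connect that reference to your specific output: as I read Pinheiro & Bates, each fixed-effect term gets a denominator df of m_i - (m_{i-1} + p_i), where m_i is the number of groups at nesting level i (with m_0 = 1 for the population level) and p_i is the number of fixed-effect coefficients estimated at level i, i.e. those that vary between groups at that level but are constant within them. Here is a rough sketch using the group counts from your own summary (20, 60 and 180); treat the level assignments as my reading of the rule rather than nlme's exact bookkeeping, and note that the intercept is handled as a special case in their scheme (it is reported with 114 in your output).

# Rough sketch of the "containment" denominator-df rule described in
# Pinheiro & Bates (2000, p. 91): denDF_i = m_i - (m_{i-1} + p_i).
# Group counts taken from the summary() output above.
m0 <- 1    # population level
m1 <- 20   # participant
m2 <- 60   # drink %in% participant
m3 <- 180  # imagery %in% drink %in% participant (one observation per group)

# drink (2 contrasts) is constant within the drink %in% participant groups,
# so it is estimated at level 2:
m2 - (m1 + 2)        # 60 - (20 + 2) = 38, the DF reported for the drink terms

# imagery (2 contrasts) and drink:imagery (4 contrasts) still vary within
# those groups, so they are estimated at level 3:
m3 - (m2 + 2 + 4)    # 180 - (60 + 6) = 114, the DF for imagery and the interactions

These denominator df also appear as the denDF column when you call anova(attitudeModel) on the fitted model.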


This totally solved my confusion. Thank you so much!!! Truly appreciate it.
