[R] GLMM (lme4) vs. glmmPQL output
Prof Brian Ripley
ripley at stats.ox.ac.uk
Mon Jan 12 18:01:54 CET 2004
Although it has been neither stated nor credited, this is very close to an
example in MASS4 (there seems to be a difference in coding). Both the dataset
and many of the alternative analyses are from the work of my student James
McBroom (and other students have contributed).
MASS4 does contain comparisons with other methods, including our
implementation of the `gold standards': numerical ML and Bayes posterior
densities with a vague prior. We have also run this example against
several other implementations and simulated from the model fitted by
numerical ML. All of our comparisons have suggested that glmmPQL is in the
right ballpark, so once I realised the origin of the example the GLMM
results surprised me.
On Mon, 12 Jan 2004, Dieter Menne wrote:
> Goran,
>
> from my reply to a message from Douglas Bates; ">" is quoted from a mail by
> DB.
>
> > I believe the distinction is explained in the lme4 documentation but,
> > in any case, the standard errors and the approximate log-likelihood
> > for glmmPQL are from the lme model that is the last step in the
> > optimization. The corresponding quantities from GLMM are from another
> > approximation that should be more reliable.
>
> I have compared glmmPQL, glmmML, geese and GLMM; results and code are below.
> I am aware that glmmPQL uses a different method to handle the problem, and
> geese (geepack) makes considerably different assumptions, but the
> results are very similar. On the other hand, I had expected that the glmmML
> results, if reasonable at all, would be close to GLMM. Yet they are not,
> but rather are close to the other three.
[...]
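[For concreteness, a minimal sketch of the kind of comparison described above, assuming the data are the `bacteria' set shipped with MASS (the MASS4 example referred to). The glmmPQL formula follows the MASS scripts; the glmmML call is an illustration, not the elided code from the original mail.]

```r
## Sketch only: assumes the MASS4 `bacteria' example is the dataset in question.
library(MASS)    # provides the bacteria data and glmmPQL (which uses nlme)
library(glmmML)  # maximum-likelihood fit via glmmML

## PQL fit: binary response, random intercept per child (ID),
## following the MASS4 scripts
fit.pql <- glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,
                   family = binomial, data = bacteria)
summary(fit.pql)

## glmmML takes a `cluster' argument instead of a random-effects formula;
## the response is recoded to logical since y is a two-level factor
fit.ml <- glmmML(I(y == "y") ~ trt + I(week > 2),
                 family = binomial, data = bacteria, cluster = ID)
summary(fit.ml)
```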
--
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595