[R] LME: internal workings of QR factorization --repost
Izmirlian, Grant (NIH/NCI) [E]
izmirlig at mail.nih.gov
Thu Apr 12 22:50:04 CEST 2007
Hi:
I've been reading "Computational Methods for Multilevel Modeling" by Pinheiro and Bates, with
the idea of embedding the technique in my own C-level code. The basic idea is to rewrite
the joint density in a form that mimics a single least squares problem conditional upon the
variance parameters. The paper is fairly clear, except that some important detail
is missing. For instance, when we first meet Q_(i):
   [ Z_i    X_i   y_i ]           [ R_11(i)  R_10(i)  c_1(i) ]
   [ Delta   0     0  ]  = Q_(i)  [    0     R_00(i)  c_0(i) ]
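To fix notation, here is a toy R version of the left-hand side (the dimensions and the
contents of Z_i, X_i, y_i and Delta below are made up purely so the shapes are concrete):

set.seed(1)
n_i <- 7; q <- 2; p <- 3
Z_i   <- matrix(rnorm(n_i * q), n_i, q)
X_i   <- matrix(rnorm(n_i * p), n_i, p)
y_i   <- rnorm(n_i)
Delta <- diag(q)    # stand-in for the relative precision factor
## augmented matrix on the left: (n_i + q) x (q + p + 1)
aug <- rbind(cbind(Z_i,   X_i,             y_i),
             cbind(Delta, matrix(0, q, p), 0))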
the text indicates that the QR factorization is limited to the first q columns of the
augmented matrix on the left. If one plunks those first q columns into a QR factorization,
one obtains a matrix Q with orthonormal columns that is (n_i + q) x q and a nonsingular
upper triangular matrix R that is q x q. While the text describes R as nonsingular, upper
triangular, and q x q, it describes Q_(i) as a square (n_i + q) x (n_i + q) orthogonal matrix,
and the remaining columns of the matrix on the right are defined by applying transpose(Q_(i))
to the corresponding columns on the left. The question is: how do I augment my Q, which is
(n_i + q) x q with orthonormal columns, with the missing (n_i + q) x n_i portion to produce
the square orthogonal matrix mentioned in the text? I tried appending the n_i x n_i identity
matrix to the block diagonal, but this doesn't work: the resulting likelihood is insensitive
to the variance parameters.
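In R terms, and assuming I am reading the text correctly, what I have and what I am after
seem to correspond to the following (qr(), qr.Q() with complete = TRUE, and qr.qty() being
what I would like to reproduce at the C level, presumably via LAPACK's dgeqrf/dorgqr/dormqr):

## QR factorization of the first q columns of the toy augmented matrix above
qr1    <- qr(aug[, 1:q])
R11    <- qr.R(qr1)                   # q x q, nonsingular upper triangular
Q_thin <- qr.Q(qr1)                   # (n_i + q) x q, orthonormal columns -- what I have
Q_full <- qr.Q(qr1, complete = TRUE)  # (n_i + q) x (n_i + q), square orthogonal -- what the text describes
## t(Q_(i)) applied to the remaining columns, without forming Q explicitly;
## the first q rows give (R_10(i), c_1(i)), the last n_i rows give (R_00(i), c_0(i))
rest <- qr.qty(qr1, aug[, -(1:q)])

If that correspondence is right, then the missing piece I am asking about is just the last
n_i columns of Q_full.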
Grant Izmirlian
NCI