[R] Using optim() function to find MLE

Ivan Krylov
Mon Jul 29 08:50:10 CEST 2024


On Mon, 29 Jul 2024 09:52:22 +0530,
Christofer Bogaso <bogaso.christofer using gmail.com> wrote:

> LL = function(b0, b1)

help(optim) documents that the function to be optimised takes a single
argument, a vector containing the parameters. Here's how your LL
function can be adapted to this interface:

LL <- function(par) {
 b0 <- par[1]
 b1 <- par[2]
 sum(apply(as.matrix(dat[, c('PurchasedProb', 'Age')]), 1,
           function(row) {
            p <- 1 / (1 + exp(-(b0 + b1 * row['Age'])))
            row['PurchasedProb'] * log(p) +
             (1 - row['PurchasedProb']) * log(1 - p)
           }))
}
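As a side note, the same log-likelihood can be computed without apply() using vectorised arithmetic, and plogis() is a numerically safer way to write 1 / (1 + exp(-x)). A sketch, using made-up data because the `dat` from your message is not shown here:

```r
# Made-up stand-in for the poster's `dat`, which is not shown in the thread
set.seed(42)
dat <- data.frame(Age = runif(20, 20, 60))
dat$PurchasedProb <- rbinom(20, 1, plogis(-3 + 0.1 * dat$Age))

# Vectorised equivalent of the apply()-based LL above;
# plogis(x) computes 1 / (1 + exp(-x)) without overflowing for large |x|
LL2 <- function(par) {
 p <- plogis(par[1] + par[2] * dat$Age)
 sum(dat$PurchasedProb * log(p) + (1 - dat$PurchasedProb) * log(1 - p))
}
```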

Furthermore, LL(c(0, 1)) results in -Inf. All the methods supported by
optim() require at least the initial parameters to result in finite
values, and L-BFGS-B requires all evaluations to be finite. You're also
maximising the function, and optim() defaults to minimisation, so you
need an additional parameter to adjust that (or rewrite the LL function
further):

result1 <- optim(
 par = c(0, 0), fn = LL, method = "L-BFGS-B",
 control = list(fnscale = -1)
)

> coef(result1)

help(optim) documents the return value of optim() as a plain list with
no class and no $coefficients component, so coef() will not work on it.
Use result1$par to access the estimated parameters.
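Since this objective is a Bernoulli log-likelihood, the optim() result can also be cross-checked against glm() with family = binomial, which fits the same model. A sketch, again with made-up data standing in for `dat`:

```r
# Made-up stand-in for the poster's `dat`
set.seed(1)
dat <- data.frame(Age = runif(200, 20, 60))
dat$PurchasedProb <- rbinom(200, 1, plogis(-3 + 0.1 * dat$Age))

LL <- function(par) {
 p <- plogis(par[1] + par[2] * dat$Age)
 sum(dat$PurchasedProb * log(p) + (1 - dat$PurchasedProb) * log(1 - p))
}

result1 <- optim(par = c(0, 0), fn = LL, method = "L-BFGS-B",
                 control = list(fnscale = -1))

# glm() maximises the same likelihood, so the estimates should agree
fit <- glm(PurchasedProb ~ Age, family = binomial, data = dat)
result1$par   # MLE from optim()
coef(fit)     # should be numerically close
```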

> Is there any way to force optim() function to use Newton-CG algorithm?

I'm assuming you mean the method documented in
<https://docs.scipy.org/doc/scipy/reference/optimize.minimize-newtoncg.html>.
optim() doesn't support the truncated (line search) Newton-CG method.
See the 'optimx' and 'nloptr' packages for an implementation of a
truncated Newton method (not necessarily exactly the same one).
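If staying within base R is acceptable, method = "CG" in optim() gives a nonlinear conjugate-gradient method; it is related to, but not the same as, the truncated Newton-CG method above. A sketch with made-up data:

```r
# Made-up stand-in for the poster's `dat`
set.seed(7)
dat <- data.frame(Age = runif(200, 20, 60))
dat$PurchasedProb <- rbinom(200, 1, plogis(-3 + 0.1 * dat$Age))

LL <- function(par) {
 p <- plogis(par[1] + par[2] * dat$Age)
 sum(dat$PurchasedProb * log(p) + (1 - dat$PurchasedProb) * log(1 - p))
}

# Conjugate gradients (Fletcher-Reeves by default); not Newton-CG, but
# gradient-based and available without extra packages. CG can be
# sensitive to parameter scaling, hence the raised iteration limit.
result_cg <- optim(par = c(0, 0), fn = LL, method = "CG",
                   control = list(fnscale = -1, maxit = 500))
```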

-- 
Best regards,
Ivan


