[R] Testing optimization solvers with equality constraints
J C Nash
profjcnash sending from gmail.com
Sat May 22 03:55:24 CEST 2021
I might (and that could be a stretch) be expert in unconstrained problems,
but I've nowhere near HWB's experience in constrained ones.
My main reason for wanting gradients is to know when I'm at a solution.
In practice, for getting to the solution, I've often found secant
methods work faster, though that is not universal, nor even "mostly"
true, but it happens more frequently than my intuition suggests.
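A minimal sketch of the kind of check meant here, using numDeriv::grad
on a hypothetical toy objective (not a problem from this thread):
evaluate the gradient at a candidate point and look at its norm.

    library(numDeriv)                       # for grad()
    f <- function(x) (x[1] - 1)^2 + (x[2] + 2)^2   # toy objective
    xcand <- c(1, -2)                       # candidate solution
    gval <- numDeriv::grad(f, xcand)        # numerical gradient at xcand
    sqrt(sum(gval^2))                       # norm ~ 0 at a stationary point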
Best, JN
On 2021-05-21 3:31 p.m., Mark Leeds wrote:
> Hi Hans: I can't help with the projection of the gradient onto the
> constraint, but it may give insight just to see what the value of the
> gradient itself is when the optimization stops.
>
> John Nash (definitely one of THE expeRts when it comes to optimization
> in R) often strongly recommends supplying gradients, so I'm not sure
> what those functions that don't allow a gradient argument are doing.
> I guess some numerical approximation.
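A minimal sketch of the difference, using base R's optim() with BFGS on
a hypothetical toy problem; when gr is omitted, optim falls back to a
finite-difference approximation of the gradient.

    f <- function(x) sum((x - 1:5)^2)        # toy objective
    g <- function(x) 2 * (x - 1:5)           # its analytic gradient
    s1 <- optim(rep(0, 5), f,    method = "BFGS")  # numeric gradient
    s2 <- optim(rep(0, 5), f, g, method = "BFGS")  # analytic gradient
    rbind(s1$counts, s2$counts)   # calls to fn and gr for each run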
>
> Hopefully John or Ravi will chime in with their expertise when they see
> this posting.
>
>
> Mark
>
> P.S.: You may want to try Rvmminb. John wrote that one, and it allows
> for constraints (I remember it working nicely for me when I had
> problems with some other solvers, but I don't remember which ones).
> I'm not certain whether it can handle equalities, though.
>
>
>
>
>
>
> On Fri, May 21, 2021 at 2:06 PM Hans W <hwborchers using gmail.com> wrote:
>
>> Mark, you're right, and it's a bit embarrassing as I thought I had
>> looked at it closely enough.
>>
>> This solves the problem for 'alabama::auglag()' in both cases, but NOT for
>>
>> * NlcOptim::solnl -- with x0
>> * nloptr::auglag -- both x0, x1
>> * Rsolnp::solnp -- with x0
>> * Rdonlp::donlp2 -- with x0
>>
>> since for these solver calls the gradient function g was *not* used.
>>
>> Actually, 'solnl()' and 'solnp()' do not accept a gradient argument,
>> 'nloptr::auglag()' says it does not use a supplied gradient, and
>> 'donlp2' likewise does not provide for one.
>> Gradients, if needed, are computed internally, which in most cases is
>> sufficient anyway.
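For contrast, a minimal sketch of a call where the supplied gradient
*is* used: alabama::auglag() accepts gr and an equality-constraint
Jacobian heq.jac. The toy problem below is hypothetical, not the one
discussed in this thread.

    library(alabama)
    f   <- function(x) sum(x^2)                # toy objective
    g   <- function(x) 2 * x                   # its gradient
    heq <- function(x) x[1] + x[2] - 1         # equality constraint
    heq.jac <- function(x) matrix(c(1, 1), 1)  # constraint Jacobian
    sol <- auglag(par = c(2, 0), fn = f, gr = g,
                  heq = heq, heq.jac = heq.jac)
    sol$par                                    # close to c(0.5, 0.5)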
>>
>> So the question remains:
>> Is the fact that the projection of the gradient onto the constraint is
>> zero the reason the solvers do not find the minimum?
>>
>> And how can this be avoided? Except, maybe, by checking the gradient
>> against all the given constraints.
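One way to check for exactly this situation (a sketch; the gradient g0
and constraint Jacobian J below are hypothetical stand-ins, since the
thread's actual problem is not shown here): project the gradient onto
the tangent space of the equality constraints and see whether it
vanishes.

    # Projection of g0 onto the null space of the constraint Jacobian J,
    # i.e. the tangent space of heq(x) = 0 at the current point.
    proj_grad <- function(g0, J) {
      g0 - t(J) %*% solve(J %*% t(J), J %*% g0)
    }
    g0 <- c(3, 3, 3)                    # stand-in gradient at x0
    J  <- matrix(c(1, 1, 1), nrow = 1)  # stand-in Jacobian of heq at x0
    proj_grad(g0, J)   # all zero here: x0 already satisfies the
                       # first-order condition along the constraint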
>>
>> Thanks --HW
>>
>>
>>
>> On Fri, 21 May 2021 at 17:58, Mark Leeds <markleeds2 using gmail.com> wrote:
>>>
>>> Hi Hans: I think you are missing minus signs in the 2nd and 3rd
>>> elements of your gradient.
>>> Also, I don't know how all of the optimization functions work as far
>>> as their arguments go, but it's best to supply the gradient when
>>> possible. I hope it helps.
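A quick way to catch sign errors of this kind (a sketch; f and g below
are hypothetical stand-ins for the actual objective and gradient, which
are not shown in the thread): compare the hand-coded gradient with a
numerical one from numDeriv.

    library(numDeriv)
    f  <- function(x) x[1]^2 - x[2]^2 - x[3]^2     # stand-in objective
    g  <- function(x) c(2*x[1], -2*x[2], -2*x[3])  # hand-coded gradient
    x0 <- c(1, 2, 3)
    max(abs(g(x0) - numDeriv::grad(f, x0)))  # ~1e-8 or smaller if correct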
>>>
>>
>